Capstone Project

Getting the input data

Datasets and Inputs

Daily stock prices for the last five years for each stock will be extracted automatically from Yahoo! Finance (where available; some of the companies do not have that long a history on the stock market). Stocks with less than three years of history will be excluded. If any stock is missing, its data will not be extracted from other sources, to keep things simple. After the automatic download, the data will be stored in .csv files, one per stock, both so the information can be re-read later and as an extra backup.

The reason for this is that Yahoo! Finance has turned out to be less reliable since the acquisition by Verizon in 2016, with less consistent data, problems downloading it, and the closure of its API service. Other sources such as Google, Quandl, Alpha Vantage, Nordnet, Avanza, and some other Swedish alternatives have been considered, but turned out to be even worse.

The data will include the date, open and closing price for each particular day, high and low, volume, as well as the adjusted closing price. The latter takes into account changes in the stock price due to both splits and dividends, while the other prices are only adjusted for splits. It is crucial that the input data accounts for splits, because a stock's price can be reduced (or increased) tenfold through a split without any effect on the company's value. Running an algorithm on unadjusted data would most likely produce worthless results.
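To see why the split adjustment matters, here is a small self-contained sketch with made-up numbers (not taken from the downloaded data): a hypothetical 10:1 split makes the raw closing price look like a 90 % crash, while the adjusted close reflects the true, modest price moves.

```python
import pandas as pd

# Hypothetical prices around a 10:1 split: the raw close drops tenfold
# on the split date, while the adjusted close stays continuous.
dates = pd.to_datetime(['2017-01-02', '2017-01-03', '2017-01-04', '2017-01-05'])
raw_close = pd.Series([100.0, 102.0, 10.1, 10.3], index=dates)
adj_close = pd.Series([10.0, 10.2, 10.1, 10.3], index=dates)

# Daily returns on the raw close show a bogus -90% "crash" ...
raw_returns = raw_close.pct_change()
# ... while the adjusted close shows the real, small moves.
adj_returns = adj_close.pct_change()
print(raw_returns.min())   # about -0.90: an artifact of the split
print(adj_returns.min())   # a small negative value: the real move
```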

In [1]:
####### For reproducible results
#from numpy.random import seed
#seed(1)
#from tensorflow import set_random_seed
#set_random_seed(2)
#######

import pandas as pd
import copy
import datetime
from pandas_datareader import data
from pandas_datareader._utils import RemoteDataError  # needed to catch download errors below
import time
from retrying import retry
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator, IndexFormatter
import numpy as np
import math
import operator

##### Introduce some useful functions ####

# Yahoo! Finance isn't that reliable and might throw a RemoteDataError if we try to get the data too fast.
# For this reason, add a @retry decorator to minimize the risk of RemoteDataError: 10 retries, with a
# one-second pause in between each attempt.
@retry(stop_max_attempt_number=10)
def get_stock_data(ticker, dates):
    try:
        # start and end are defined globally in the next cell
        stock = data.get_data_yahoo(ticker, start, end)
        stock.sort_index(ascending=False, inplace=True)
        return stock
    except RemoteDataError:
        print('Trying again..')
        time.sleep(1)
        raise  # re-raise so that @retry actually performs another attempt
        
def create_file_path(ticker):
    """Create a file path to store file(s)"""
    base = '/Users/jakob/Desktop/Programming/Udacity Machine Learning Nano Degree/Capstone Project/Data_Capstone/'
    return base + ticker + '.csv'

def fill_missing_values(data_df):
    """Fill missing values forward, then backwards"""
    data_df.fillna(method="ffill", inplace=True)
    data_df.fillna(method="bfill", inplace=True)
    
In [2]:
start = datetime.datetime(2012,12,27)
end = datetime.date.today()
dates = pd.date_range(start, end)

stock_names = ['OMX Stockholm 30', 'Acando B', 'Addnode Group B', 'Addtech B', 'Africa Oil', 
              'AQ Group', 'Arcam', 'Beijer Alma', 'Beijer Ref', 'BioGaia B', 'Biotage', 
              'BlackPearl R. Inc.','Bulten', 'Bure Equity','Byggmax','Catella A','Catella B', 
              'Catena', 'Cavotec SA','CellaVision', 'Clas Ohlson B', 'Cloetta B', 
              'Concentric', 'Creades A', 'Diös Fastigheter', 'Duni', 'Elanders B', 
              'EnQuest PLC', 'Fagerhult', 'Fast Partner','G5 Entertainment', 'Gunnebo', 
              'Haldex', 'Hansa Medical', 'Heba B', 'HiQ International', 'HMS Networks', 
              'IAR Systems', 'INVISIO Communications', 'Kabe B', 'KappAhl', 'Karo Pharma',
              'Knowit','Lindab International','Lucara Diamond Corp.','Medivir B','Mekonomen', 
              'Midsona A', 'Midsona B', 'Mycronic', 'Nederman Holding', 'Net Insight B', 
              'New Wave B', 'Nolato B', 'OEM International B', 'Opus Group','Orexo', 'Probi', 
              'Qliro Group', 'RaySearch Laboratories B', 'Rezidor Hotel Group', 'SAS', 
              'Semafo', 'SkiStar B', 'Starbreeze B','Swedol B','Systemair','Tethys Oil', 
              'Traction B', 'VBG Group B','Vitrolife','Xvivo Perfusion','Öresund Investment']

tickers = ['^OMX', 'ACAN-B.ST', 'ANOD-B.ST', 'ADDT-B.ST', 'AOI.ST', 'AQ.ST', 'ARCM.ST', 
           'BEIA-B.ST', 'BEIJ-B.ST', 'BIOG-B.ST', 'BIOT.ST', 'PXXS-SDB.ST','BULTEN.ST',
           'BURE.ST', 'BMAX.ST', 'CAT-A.ST', 'CAT-B.ST', 'CATE.ST', 'CCC.ST', 'CEVI.ST', 
           'CLAS-B.ST', 'CLA-B.ST', 'COIC.ST', 'CRED-A.ST', 'DIOS.ST', 'DUNI.ST', 'ELAN-B.ST', 
           'ENQ.ST', 'FAG.ST', 'FPAR.ST', 'G5EN.ST', 'GUNN.ST', 'HLDX.ST', 'HMED.ST', 
           'HEBA-B.ST', 'HIQ.ST', 'HMS.ST', 'IAR-B.ST', 'IVSO.ST', 'KABE-B.ST', 'KAHL.ST', 
           'KARO.ST', 'KNOW.ST', 'LIAB.ST', 'LUC.ST', 'MVIR-B.ST','MEKO.ST', 'MSON-A.ST', 
           'MSON-B.ST', 'MYCR.ST', 'NMAN.ST', 'NETI-B.ST', 'NEWA-B.ST','NOLA-B.ST','OEM-B.ST', 
           'OPUS.ST', 'ORX.ST', 'PROB.ST', 'QLRO.ST', 'RAY-B.ST', 'REZT.ST', 'SAS.ST',
           'SMF.ST', 'SKIS-B.ST', 'STAR-B.ST', 'SWOL-B.ST', 'SYSR.ST', 'TETY.ST', 'TRAC-B.ST', 
           'VBG-B.ST', 'VITR.ST', 'XVIVO.ST', 'ORES.ST']

Download and save the data into .csv files

Many of the stocks have missing data values. To fix this, pandas' built-in forward fill and backward fill are used, as defined in the function fill_missing_values() above. This is a common approach when dealing with incomplete time series data.
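As a small sketch of what fill_missing_values() does (with toy values, not real stock data): forward fill propagates the last known price into gaps, and backward fill then covers any leading gap that forward fill could not reach.

```python
import numpy as np
import pandas as pd

prices = pd.Series([np.nan, 12.0, np.nan, np.nan, 13.0, np.nan])
prices.fillna(method='ffill', inplace=True)  # forward fill: [nan, 12, 12, 12, 13, 13]
prices.fillna(method='bfill', inplace=True)  # backward fill handles the leading NaN
print(prices.tolist())  # [12.0, 12.0, 12.0, 12.0, 13.0, 13.0]
```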

In [ ]:
### Download and save the data into .csv files ###

#time1 = time.time()    
      
#for i in tickers:
#    stock_df = get_stock_data(i, dates)
    
#    # Fill missing values forward, then, fill backward
#    fill_missing_values(stock_df)
    
#    # Save the files as .csv as well
#    stock_df.to_csv(create_file_path(i)) 
    
    
#print("Total time to download the data: {0:0.0f} s".format(time.time() - time1))
In [ ]:
#display(stock_df.head())

Import the data from the .csv files (if they are already there).

What's happening below? First, all the .csv files are loaded into dataframes and stored in a list called loaded_stocks, with the corresponding ticker names kept in the list file_names. Each dataframe contains all the data for one stock, so each position in the list holds the data for one stock. Further on, the stock's ticker is embedded in the index name of each dataframe.

Set the index column to the Date column when using .read_csv. This will later make normalizing the data easier.

In [3]:
import os
from glob import glob


# Count the number of files in the input data directory
directory = '/Users/jakob/Desktop/Programming/Udacity Machine Learning Nano Degree/Capstone Project/Data_Capstone/'
file_paths = glob(directory+"*.csv")  # Get each .csv file in the directory

# Get all the file names
file_names = []
for root, dirs, files in os.walk(directory):  
    for filename in files:
        filename = filename[:-4]   # Just keep the ticker name, without the .csv file extension
        file_names.append(filename)
del file_names[0]   # drop the first entry, a hidden non-.csv file (e.g. .DS_Store) picked up by os.walk
        
# Define a common index for all dataframes
m = pd.read_csv(file_paths[0], index_col='Date')
glob_index = m.index

# Get the input data from the .csv files
loaded_stocks = []
for i in range(len(file_paths)):
    stock = pd.read_csv(file_paths[i], index_col='Date')
    stock['Volatility'] = (stock['High'] - stock['Low']) / stock['Open']  # Calculate the volatility
    stock.index.names = [file_names[i] + '__' + 'Date']                   # Change the index name to stock name + Date
    loaded_stocks.append(stock)

dim = loaded_stocks[1].shape    
print("Total amount of input data points: {0}".format(dim[0] * dim[1] * len(loaded_stocks)))
print("Number of stocks: ", len(loaded_stocks))
#print(this_stock.to_string())   # print the entire dataframe
display(loaded_stocks[1].head())


#print(loaded_stocks[1].index.name[:-6])
Total amount of input data points: 665322
Number of stocks:  73
Open High Low Close Adj Close Volume Volatility
ACAN-B.ST__Date
2018-03-02 28.549999 28.950001 28.150000 28.700001 28.700001 122849 0.028021
2018-03-01 29.549999 29.600000 29.000000 29.000000 29.000000 120838 0.020305
2018-02-28 29.799999 29.799999 29.500000 29.650000 29.650000 60613 0.010067
2018-02-27 29.700001 30.000000 29.700001 29.850000 29.850000 53990 0.010101
2018-02-26 29.500000 30.000000 29.500000 29.650000 29.650000 71179 0.016949

Normalize the stock data according to the first date (2012-12-27) in each stock dataframe.

Display the top and bottom five values, both unchanged and normalized.
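The normalization below can be sketched on toy numbers (made up, not from the real data). Because the dataframes are sorted newest first, the first trading day sits at the end, so dividing by `prices.iloc[-1]` rescales every column to 1.0 on the first date.

```python
import pandas as pd

# Toy prices in the same reversed order as the stock dataframes
# (newest first, so the first trading day is the last row).
prices = pd.DataFrame({'Adj Close': [30.0, 28.0, 25.0, 20.0]},
                      index=['2018-03-02', '2018-03-01', '2013-01-02', '2012-12-27'])
norm = prices / prices.iloc[-1]   # divide by the first date's value
print(norm['Adj Close'].tolist())  # [1.5, 1.4, 1.25, 1.0]
```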

In [4]:
def normalize_data(prices):
    """ Normalize data stored in prices"""
    if isinstance(prices, pd.DataFrame):  # DataFrame
        prices = prices / prices.iloc[-1]  # normalize by the first date's value (which now sits at the end of the df)
    else:                                  # array
        prices = prices / prices[-1]
    return prices  

# Normalize all the stock prices
#col = ['Open', 'High', 'Low', 'Close', 'Adj Close']
norm_stock_prices = []
for i in loaded_stocks:
    norm_d = normalize_data(i)
    norm_d[norm_d == np.inf] = 0          # if any value in norm_d is infinite, set it to 0
    fill_missing_values(norm_d)
    norm_d = norm_d[~norm_d.index.duplicated(keep='last')]  # Remove duplicated indices (if any)    
    norm_stock_prices.append(norm_d)

#display(loaded_stocks[1].head(5), loaded_stocks[1].iloc[-5:, :])
display(norm_stock_prices[1].head(), norm_stock_prices[1].iloc[-5:, :])

print(len(norm_stock_prices[1]))
Open High Low Close Adj Close Volume Volatility
ACAN-B.ST__Date
2018-03-02 2.130597 1.942953 2.100746 1.926175 2.631674 2.732224 0.250321
2018-03-01 2.205224 1.986577 2.164179 1.946309 2.659183 2.687499 0.181387
2018-02-28 2.223881 2.000000 2.201493 1.989933 2.718785 1.348064 0.089933
2018-02-27 2.216418 2.013423 2.216418 2.003356 2.737124 1.200765 0.090235
2018-02-26 2.201493 2.013423 2.201493 1.989933 2.718785 1.583057 0.151412
Open High Low Close Adj Close Volume Volatility
ACAN-B.ST__Date
2013-01-04 1.104478 1.006711 1.074627 1.000000 1.000000 0.095434 0.362162
2013-01-03 1.104478 1.006711 1.100746 1.006711 1.006711 0.042079 0.150901
2013-01-02 1.149254 1.033557 1.037313 1.006711 1.006711 0.455063 0.870130
2012-12-28 1.082090 1.033557 1.044776 1.033557 1.033557 3.355092 0.862529
2012-12-27 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
1302
In [5]:
#for tick in tickers:
#    display(get_stock(norm_stock_prices, tick).head())
In [6]:
# Define some more useful functions.

def get_stock(stock_list, ticker):
    """ Returns the dataframe containing the stock with the ticker symbol ticker. 
        stock_list is a list of stock data stored in dataframes. """
    for sd in stock_list:
        if sd.index.name[:-6] == ticker:
            return sd            

def get_ticker(DataFrame):
    """Return the ticker symbol for the stock in DataFrame"""
    return DataFrame.index.name[:-6]
  

# Just check so that it works as intended
starbreeze_df = get_stock(loaded_stocks, 'STAR-B.ST')
display(starbreeze_df.head()) 
Open High Low Close Adj Close Volume Volatility
STAR-B.ST__Date
2018-03-02 9.940 10.18 9.895 9.960 9.960 131028.0 0.028672
2018-03-01 10.500 10.60 9.955 9.955 9.955 791073.0 0.061429
2018-02-28 9.705 10.59 9.705 10.490 10.490 1284906.0 0.091190
2018-02-27 9.660 9.91 9.600 9.895 9.895 987261.0 0.032091
2018-02-26 9.640 9.79 9.605 9.655 9.655 468709.0 0.019191

Define functions for plotting the dataframes. Better and easier solutions may exist, but quite a few adjustments are needed to get acceptable plots.

In [7]:
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset


# Define a function for plotting a dataframe
def plot_it(data_df, title='', xlabel='', ylabel='', legend=''):
    """Plot the stock stored in data_df, title is the plot title"""
    plt.figure(figsize=(10, 5))
    pl = data_df.loc[::-1].plot(fontsize=12, figsize=(13, 5))
    
    pl.set_title(label=title, fontsize=20)
    pl.set_xlabel(xlabel, fontsize=15) 
    plt.autoscale(enable=True, axis='x', tight=True)    
    pl.set_ylabel(ylabel, fontsize=15)
    plt.legend(fontsize=12, loc='upper left') # [legend],
    plt.grid(axis='both', alpha=.5)
    pl.xaxis.set_major_locator(MaxNLocator(12))
    pl.xaxis.set_major_formatter(IndexFormatter(data_df.index[::-1]))
    plt.xticks(rotation=50, horizontalalignment='center', rotation_mode='default')
    plt.show()


# Define a function for plotting two dataframes in different colours in the same plot
def plot_2it(data_df1, data_df2, label1='', label2='', title=''):
    """Plot the stock data stored in data_df1 and data_df2 in different colors. 
        label1 and label2 are the label names while title is the plot title"""
    plt.figure(figsize=(10, 5))
    pl = data_df1.loc[::-1].plot(fontsize=12, figsize=(13, 5), label=label1, color='green') 
    plt.plot([None for i in data_df1.loc[::-1]] + [x for x in data_df2.loc[::-1]], label=label2, color='royalblue')
    
    pl.set_title(label=title, fontsize=20)
    pl.set_xlabel('Date', fontsize=15) 
    plt.autoscale(enable=True, axis='x', tight=True)    
    pl.set_ylabel('Price', fontsize=15)
    plt.legend(fontsize=12, loc='upper left')
    plt.grid(axis='both', alpha=.5)
    pl.xaxis.set_major_locator(MaxNLocator(12))
    temp = pd.concat([data_df2, data_df1], axis=1)
    pl.xaxis.set_major_formatter(IndexFormatter(temp.index))
    plt.xticks(rotation=50, horizontalalignment='center', rotation_mode='default')
    plt.show()   

    
# Define a function for plotting two dataframes in different colours in the same plot
def plot_3it(data_df1, data_df2, data_df3, label1='', label2='', label3='', title=''):
    """Plot the stock data stored in data_df1 and data_df2 in different colors. 
        The entire dataset is stored in data_df3. 
        label1, label2 and label3 are the label names while title is the plot title"""
    plt.figure(figsize=(10, 5))
    pl = data_df1.loc[::-1].plot(fontsize=12, figsize=(13, 5), label=label1, color='green') 
    plt.plot([None for i in data_df1.loc[::-1]] + [x for x in data_df2.loc[::-1]], label=label2, color='royalblue')
    plt.plot(data_df3.loc[::-1], label=label3, color='darkorange')
    
    pl.set_title(label=title, fontsize=20)
    pl.set_xlabel('Date', fontsize=15) 
    plt.autoscale(enable=True, axis='x', tight=True)    
    pl.set_ylabel('Price', fontsize=15)
    plt.legend(fontsize=12, loc='upper left')
    plt.grid(axis='both', alpha=.5)
    pl.xaxis.set_major_locator(MaxNLocator(12))
    temp = pd.concat([data_df2, data_df1], axis=1)
    pl.xaxis.set_major_formatter(IndexFormatter(temp.index))
    plt.xticks(rotation=50, horizontalalignment='center', rotation_mode='default')
    plt.show()   
In [8]:
# Plot the stocks and compare every single one with the OMX Stockholm 30 index
count = 0
for stock in norm_stock_prices[1:]:
    temp_df = pd.concat([stock.loc[:, 'Adj Close'], norm_stock_prices[0].loc[:, 'Adj Close']],
                       keys=[get_ticker(stock), get_ticker(norm_stock_prices[0])], axis=1)
    fill_missing_values(temp_df)
    concat_df = temp_df.set_index(glob_index) # Set index equal to OMX's index   
    if concat_df.iloc[-1,0] != 1.0:   # If the dataframe is inverted, correct it.
        concat_df = concat_df[::-1]
        #concat_df = concat_df.reverse()   # reverse() can also be used
        concat_df.index = concat_df.index[::-1]
        
    if count <= 10:
        plot_it(concat_df, xlabel='Date', ylabel='Price')
    count += 1
    
<matplotlib.figure.Figure at 0x10ad77ef0>
<matplotlib.figure.Figure at 0x10ddb4940>
<matplotlib.figure.Figure at 0x10e0a66a0>
<matplotlib.figure.Figure at 0x10e238390>
<matplotlib.figure.Figure at 0x10e270b70>
<matplotlib.figure.Figure at 0x10e527748>
<matplotlib.figure.Figure at 0x10e56e208>
<matplotlib.figure.Figure at 0x10e5be208>
<matplotlib.figure.Figure at 0x10e615240>
<matplotlib.figure.Figure at 0x1156f9ef0>
<matplotlib.figure.Figure at 0x115871470>

They all seem to be correct. So we have successfully downloaded, imported, normalized, filled in missing values for, and plotted data for 73 different stocks, OMX Stockholm 30 included.

MinMaxScale and Train-Test Split

Apply a MinMaxScaler, do a train-test split and plot the result for the first 10 stocks, with different colors for the training and test sets.
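The idea behind keeping one scaler per column, sketched on toy values (made up for illustration): each column is mapped to [0, 1] with its own min and max, and the same scaler object can later invert the transform to recover the original scale.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# One column of toy prices; fit_transform expects a 2D array.
col = np.array([20.0, 25.0, 30.0]).reshape(-1, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(col)
print(scaled.ravel())                        # [0.  0.5 1. ]

# The same scaler maps the scaled values back to the original prices.
restored = scaler.inverse_transform(scaled)
print(restored.ravel())                      # [20. 25. 30.]
```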

In [9]:
# Create a copy to keep scaled and normalized data apart. [Have to use copy.deepcopy()]

scaled_LOG_stock_prices = copy.deepcopy(norm_stock_prices)
In [10]:
from sklearn.preprocessing import MinMaxScaler

##### To be able to use fit_transform, the input data must be at most 2D


# We have to specify one scaler for each column. Each column has different min and max values, so
# the MinMaxScaler will be tuned slightly differently for each of them.
# This is needed to get the correct output later on.
# Create a MinMaxScaler for each column in each stock and store it in the dictionary many_MinMaxScalers.
# The key of each scaler is the stock ticker + the column number.
# [0, 1, 2, 3, 4, 5, 6] <=> ['Open', 'High', 'Low', 'Close', 'Adj Close', 'Volume', 'Volatility']

#Specify one scaler for each column and stock
many_MinMaxScalers = {}
for i_s in range(len(norm_stock_prices)):
    for j_s in range(7):
        many_MinMaxScalers["{0}".format(get_ticker(norm_stock_prices[i_s])+str(j_s))] = MinMaxScaler(feature_range=(0,1))
In [11]:
print(len(many_MinMaxScalers))
511
In [12]:
def MMscale_data(data):
    """ A function for scaling the data in the dataframe data"""
    for i in range(len(data.columns)):
        data.iloc[:,i] = many_MinMaxScalers[get_ticker(data)+str(i)].fit_transform(data.iloc[:,i].values.reshape(-1,1))
    return data


def Un_scale_data(data, ticker=' '):
    """ A function for unscaling the data in the variable data.
        No ticker is needed if data is a dataframe while it is needed if data is an array. """
    if ticker == ' ' or isinstance(data, pd.DataFrame):  # if dataframe
        data.iloc[:,] = many_MinMaxScalers[get_ticker(data)+str(4)].inverse_transform(data.iloc[:,].values.reshape(-1,1))   
    else: # is an array
        data = many_MinMaxScalers[ticker+str(4)].inverse_transform(data.reshape(-1,1))
    return data


def Un_scale_data_whole(data):
    """ A function for unscaling the data in the dataframe data"""
    for c in range(len(data.columns)):
        data.iloc[:,c] = many_MinMaxScalers[get_ticker(data)+str(c)].inverse_transform(data.iloc[:,c].values.reshape(-1,1))
    return data
In [13]:
LOG_norm_train_list, LOG_norm_test_list = [], []
LOG_scaled_train_list, LOG_scaled_test_list = [], []


# Split all stocks into training and test sets; plot the first 10, OMX 30 excluded
for i in range(1, len(scaled_LOG_stock_prices)):
    test_size = int(len(scaled_LOG_stock_prices[i]) * 0.20)     # Specify the test size
    MMscale_data(scaled_LOG_stock_prices[i])
    
    # Scaled and normalized
    LOG_train, LOG_test = scaled_LOG_stock_prices[i][test_size:], scaled_LOG_stock_prices[i][0:test_size]
    # save each one into a list
    LOG_scaled_train_list.append(LOG_train)
    LOG_scaled_test_list.append(LOG_test) 
    
    # Normalized
    LOG_norm_train, LOG_norm_test = norm_stock_prices[i][test_size:], norm_stock_prices[i][0:test_size]
    LOG_norm_train_list.append(LOG_norm_train)
    LOG_norm_test_list.append(LOG_norm_test)   
    if i <= 10:
        plot_2it(LOG_norm_train.loc[:, 'Adj Close'], LOG_norm_test.loc[:, 'Adj Close'], 
                 'Training set', 'Test set', get_ticker(norm_stock_prices[i]))
    
    
print("Training samples: {0}".format(len(LOG_train)))
print("Testing samples: {0}".format(len(LOG_test)))
Training samples: 1042
Testing samples: 260

Make combination plots

In [14]:
for k in range(len(LOG_norm_train_list)):
    plot_3it(LOG_norm_train_list[k].loc[:, 'Adj Close'], LOG_norm_test_list[k].loc[:, 'Adj Close'], 
             norm_stock_prices[0].loc[:, 'Adj Close'], 'Training set', 'Test set', 'OMX30', 
             title = get_ticker(LOG_norm_train_list[k]))
        

Display the intersection between the training and test set

In [15]:
display(LOG_norm_test_list[0].iloc[-3:, :], LOG_norm_train_list[0].iloc[:3, :])
Open High Low Close Adj Close Volume Volatility
ACAN-B.ST__Date
2017-02-27 2.425373 2.214765 2.365672 2.161074 2.833405 8.054200 0.357333
2017-02-24 2.380597 2.167785 2.268657 2.154362 2.824606 11.125837 0.532079
2017-02-23 2.455224 2.208054 2.246269 2.140940 2.807007 10.949381 0.760284
Open High Low Close Adj Close Volume Volatility
ACAN-B.ST__Date
2017-02-22 2.477612 2.228188 2.432836 2.194631 2.877402 5.158152 0.161447
2017-02-21 2.470149 2.241611 2.462687 2.228188 2.921399 4.770522 0.107956
2017-02-20 2.462687 2.221476 2.432836 2.214765 2.903800 6.160910 0.135354

Logistic Regression Model

In [16]:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, LSTM, LeakyReLU
from keras import optimizers
from keras import regularizers
np.set_printoptions(threshold=1000)


""" Define and compile a simple logistic regression model """
def logistic_regression_model(output_size, neurons, activ_func='relu', 
                              optimizer='adam', loss='mean_squared_error'):
    model = Sequential()
    model.add(Dense(output_size, activation=activ_func, input_shape=(7,)))
    #model.add(Dropout(dropout))   ### Better results without Dropout (tested earlier)

    model.compile(optimizer = optimizer, loss = loss)   #, metrics=['accuracy'])
    model.summary()
    return model
Using TensorFlow backend.

Scaling the data

Our training input data consists of 1041 rows and 7 columns, and so does the training output, which is the same data shifted one day ahead. Likewise, the testing data consists of 259 rows and seven columns for both input and output. Although all seven columns are kept in the output here, we are really only interested in predicting the adjusted closing price of the stock.
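The input/output construction below can be sketched on a toy sequence (made-up values): inputs are every day except the last, and outputs are the same series shifted one step, so day t is used to predict day t+1.

```python
import numpy as np

# Toy sequence of daily values, oldest first.
days = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Inputs: every day except the last; outputs: the series shifted one
# step ahead, pairing each day with the following day's value.
inputs = days[:-1]    # [1, 2, 3, 4]
outputs = days[1:]    # [2, 3, 4, 5]
print(inputs.tolist(), outputs.tolist())
```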

In [17]:
# Create the datasets: training and testing inputs as well as outputs.
LOG_train_inputs = copy.deepcopy(LOG_scaled_train_list[0][::-1][:-1])          # Remove the last day
LOG_train_outputs = copy.deepcopy(LOG_scaled_train_list[0][::-1].iloc[1:, :])  # Shift one day ahead
LOG_test_inputs = copy.deepcopy(LOG_scaled_test_list[0][::-1][:-1])
LOG_test_outputs = copy.deepcopy(LOG_scaled_test_list[0][::-1].iloc[1:, :])

print(LOG_train_inputs.shape)
print(LOG_train_outputs.shape)
print()
print(LOG_test_inputs.shape)
print(LOG_test_outputs.shape)
(1041, 7)
(1041, 7)

(259, 7)
(259, 7)
In [18]:
# Random seed for reproducibility
np.random.seed(45)

# Build the model architecture
LOG_model = logistic_regression_model(output_size=7, neurons=30)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 7)                 56        
=================================================================
Total params: 56
Trainable params: 56
Non-trainable params: 0
_________________________________________________________________

Training the model

In [19]:
# Train the model
trained_model = LOG_model.fit(LOG_train_inputs, LOG_train_outputs, epochs=20, batch_size=1, 
                              verbose=2, shuffle=True, validation_split=0.05) 
Train on 988 samples, validate on 53 samples
Epoch 1/20
 - 2s - loss: 0.0159 - val_loss: 0.0995
Epoch 2/20
 - 1s - loss: 0.0105 - val_loss: 0.0958
Epoch 3/20
 - 1s - loss: 0.0103 - val_loss: 0.0961
Epoch 4/20
 - 2s - loss: 0.0102 - val_loss: 0.0945
Epoch 5/20
 - 2s - loss: 0.0101 - val_loss: 0.0947
Epoch 6/20
 - 2s - loss: 0.0101 - val_loss: 0.0942
Epoch 7/20
 - 2s - loss: 0.0100 - val_loss: 0.0938
Epoch 8/20
 - 2s - loss: 0.0100 - val_loss: 0.0947
Epoch 9/20
 - 2s - loss: 0.0100 - val_loss: 0.0940
Epoch 10/20
 - 2s - loss: 0.0100 - val_loss: 0.0950
Epoch 11/20
 - 2s - loss: 0.0100 - val_loss: 0.0957
Epoch 12/20
 - 2s - loss: 0.0100 - val_loss: 0.0948
Epoch 13/20
 - 2s - loss: 0.0100 - val_loss: 0.0944
Epoch 14/20
 - 2s - loss: 0.0100 - val_loss: 0.0943
Epoch 15/20
 - 1s - loss: 0.0100 - val_loss: 0.0948
Epoch 16/20
 - 2s - loss: 0.0100 - val_loss: 0.0942
Epoch 17/20
 - 1s - loss: 0.0100 - val_loss: 0.0944
Epoch 18/20
 - 2s - loss: 0.0100 - val_loss: 0.0940
Epoch 19/20
 - 2s - loss: 0.0100 - val_loss: 0.0936
Epoch 20/20
 - 2s - loss: 0.0100 - val_loss: 0.0953

Plot the training error

We would expect this to decrease over time.

In [20]:
def plot_error(model):
    """ Plot the error and some statistics. """
    fig, ax1 = plt.subplots(1,1, figsize=(10, 5))
    ax1.plot(model.epoch, model.history['loss'])
    ax1.set_title('Training Error')
    ax1.set_ylabel('Loss',fontsize=12)
    ax1.set_xlabel('# Epochs',fontsize=12)
    plt.show()

# Plot the error
plot_error(trained_model)

trainScore = LOG_model.evaluate(LOG_train_inputs, LOG_train_outputs, verbose=0)
testScore = LOG_model.evaluate(LOG_test_inputs, LOG_test_outputs, verbose=0)
print("Mean Squared Error on the training data: {0:0.5f}".format(trainScore))
print("Mean Squared Error on the test data:     {0:0.5f}".format(testScore))
Mean Squared Error on the training data: 0.01434
Mean Squared Error on the test data:     0.11133

Performance on the training and test sets

Now, check how our model performs on the training and test sets by plotting the real and predicted values and comparing them. A zoomed-in plot is also used. The idea is taken from this source: http://akuederle.com/matplotlib-zoomed-up-inset

In [21]:
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset

# A function for plotting a dataframe with a zoomed in plot
def plot_zoom(data_df, title='', xlabel='', ylabel=''):
    """Plot the stock stored in data_df, title is the plot title"""
    plt.figure(figsize=(10, 5))
    pl = data_df.loc[::-1].plot(fontsize=12, figsize=(13, 5)) # legend=None,
    pl.set_title(label=title, fontsize=20)
    pl.set_xlabel(xlabel, fontsize=15) 
    plt.autoscale(enable=True, axis='x', tight=True)    
    pl.set_ylabel(ylabel, fontsize=15)
    plt.legend(fontsize=12, loc='upper left')
    plt.grid(axis='both', alpha=.5)
    pl.xaxis.set_major_locator(MaxNLocator(12))
    pl.xaxis.set_major_formatter(IndexFormatter(data_df.index[::-1]))
    plt.xticks(rotation=50, horizontalalignment='center', rotation_mode='default')
    
    # The zoomed in window
    lg = int(len(data_df)*0.1)
    axins = zoomed_inset_axes(pl, 2.5, loc=9)
    axins.plot(data_df.loc[::-1])
    x1, x2 = data_df.index[lg,], data_df.index[0]        # specify the limits
    y1 = data_df.loc[data_df.index[0]:data_df.index[lg],'True Values'].min()
    y2 = data_df.loc[data_df.index[0]:data_df.index[lg],'True Values'].max()
    axins.set_xlim(x1, x2), axins.set_ylim(y1, y2)       # apply the x-limits, apply the y-limits
    axins.set_facecolor('whitesmoke')
    axins.axis[:].set_visible(False)                     # Remove the 4 borders
    mark_inset(pl, axins, loc1=2, loc2=4, fc="none", ec="0.5") # Add some lines for the zoom effect
    plt.show()
    
    
# A function for plotting three dataframes and a zoomed in plot
def plot_3zoom(data_df1, data_df2, data_df3, title='', xlabel='', ylabel='', zoom=True):
    """data_df1 contains the train set, data_df2 contains the test set and data_df3 contains the entire dataset. 
       title is the plot title. If a zoomed in window is desired, set zoom to True"""
    line_w, line_zoom = 1.0, 1.5    # line width for the main and zoomed plot
    # Plot the predicted train and test data
    diff = len(data_df3)-len(data_df2)-len(data_df1)
    pl = data_df1.plot(color='orchid', fontsize=12, figsize=(16, 7), label=data_df1.columns[0], linewidth=line_w)
    pred = np.empty_like(data_df3)
    pred[:, :] = np.nan
    pred[len(data_df1)+diff:len(data_df3), :] = data_df2
    plt.plot(pred, color='darkorange', label=data_df2.columns[0], linewidth=line_w)
    
    # Plot the actual values
    plt.plot(data_df3, color='green', label=data_df3.columns[0], linewidth=line_w) 
    
    pl.set_title(label=title, fontsize=20)
    pl.set_xlabel(xlabel, fontsize=15) 
    plt.autoscale(enable=True, axis='x', tight=True)    
    pl.set_ylabel(ylabel, fontsize=15)
    plt.legend(fontsize=12, loc='upper left')
    plt.grid(axis='both', alpha=.5)
    pl.xaxis.set_major_locator(MaxNLocator(12))
    pl.xaxis.set_major_formatter(IndexFormatter(data_df3.index[::-1]))
    plt.xticks(rotation=50, horizontalalignment='center', rotation_mode='default')
    
    if zoom:
        ## The zoomed in window
        lg = int(len(data_df3)*0.1)
        axins = zoomed_inset_axes(pl, 2.5, loc=9)
        axins.plot(data_df1.iloc[::-1], color='orchid', linewidth=line_zoom)
        axins.plot(pred, color='darkorange', label=data_df2.columns[0], linewidth=line_zoom)
        axins.plot(data_df3.iloc[::-1], color='green', linewidth=line_zoom)
        x1, x2 = data_df1[::-1].index[-lg//2,], data_df2[::-1].index[lg//2]    # specify the limits
        
        # Check for the max and min y values in the actual values, within the x limits.
        yA1 = data_df3.loc[data_df3.index[test_size-lg//2]:data_df3.index[test_size+lg//2],'Actual Data'].min()
        yA2 = data_df3.loc[data_df3.index[test_size-lg//2]:data_df3.index[test_size+lg//2],'Actual Data'].max()
        
        # Check for the max and min y values in the train set, within the x limits.
        yTr1 = data_df1.iloc[-lg//2:, 0].min()
        yTr2 = data_df1.iloc[-lg//2:, 0].max()
        
        # Check for the max and min y values in the test set, within the x limits.
        yTe1 = data_df2.iloc[:lg//2, 0].min()
        yTe2 = data_df2.iloc[:lg//2, 0].max()
        
        ys = [yA1, yA2, yTr1, yTr2, yTe1, yTe2]
        ymax, ymin = max(ys), min(ys)                       # find the max and min values among the different y's
        axins.set_xlim(x1, x2), axins.set_ylim(ymin, ymax)         # apply the x-limits, apply the y-limits
        axins.set_facecolor('whitesmoke')
        axins.axis[:].set_visible(False)                           # Remove the 4 borders
        mark_inset(pl, axins, loc1=2, loc2=4, fc="none", ec="0.5") # Add some lines for the zoom effect
    plt.show()
In [22]:
df1 = pd.DataFrame(data=(np.transpose(LOG_model.predict(LOG_norm_train_list[0][:-1].values)))[0], 
                     index=LOG_train_inputs.index, columns=['Predictions on the Train set'])
df2 = pd.DataFrame(data=(np.transpose(LOG_model.predict(LOG_norm_test_list[0][:-1].values)))[0],
                  index=LOG_test_inputs.index, columns=['Predictions on the Test set'])
df3 = pd.DataFrame(data=norm_stock_prices[1].loc[:, 'Adj Close'])
df3.columns = ['Actual Data']
name = get_ticker(LOG_norm_train_list[0])
plot_3zoom(df1[::-1], df2[::-1], df3, title='Logistic Regression Performance on the Training and Test Sets, ' + name, 
           xlabel='Date', ylabel='Price', zoom=False)

In the above figure, the actual values are plotted in green, while the orchid line represents the predicted values for the training set and the orange line the predicted values for the test set. The model seems to perform fairly poorly on both sets, predicting little more than the previous day's value.
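The observation that the model mostly echoes the previous day's value suggests comparing it against a persistence baseline. As a sketch on made-up prices (not the real data), the baseline simply predicts that tomorrow's price equals today's, and its mean squared error gives a floor that any useful model should beat.

```python
import numpy as np

# Toy adjusted-close series (made up, oldest first).
prices = np.array([10.0, 10.2, 10.1, 10.4, 10.3])

# Persistence baseline: predict each day's price for the next day.
predictions = prices[:-1]   # "tomorrow = today"
actuals = prices[1:]
mse = np.mean((predictions - actuals) ** 2)
print(round(mse, 4))  # 0.0375
```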

Plot a couple of Logistic Regression predictions

In [32]:
def plot_some_LOG_models():
    # LOG_scaled_train_list and LOG_scaled_test_list contain the scaled stock values.
    global_time = time.time()
    nbr = 1
    for i in range(1, len(LOG_scaled_train_list)):
        print('===================')
        print('Plot: {0} (out of {1})'.format(nbr, len(LOG_scaled_train_list)-1))
        print('===================')
        
        LOG_train_inputs = copy.deepcopy(LOG_scaled_train_list[i][::-1][:-1])       
        LOG_train_outputs = copy.deepcopy(LOG_scaled_train_list[i][::-1].iloc[1:, :])
        LOG_test_inputs = copy.deepcopy(LOG_scaled_test_list[i][::-1][:-1])
        LOG_test_outputs = copy.deepcopy(LOG_scaled_test_list[i][::-1].iloc[1:, :])
    
        # Random seed for reproducibility
        np.random.seed(45)
        # Build the model architecture
        LOG_model = logistic_regression_model(output_size=7, neurons=30)
        # Train the model
        trained_model = LOG_model.fit(LOG_train_inputs, LOG_train_outputs, epochs=20, 
                                      batch_size=1, verbose=2, shuffle=True, validation_split=0.05) 
        
        trainScore = LOG_model.evaluate(LOG_train_inputs, LOG_train_outputs, verbose=0)
        testScore = LOG_model.evaluate(LOG_test_inputs, LOG_test_outputs, verbose=0)
        print("Mean Squared Error on the training data: {0:0.5f}".format(trainScore))
        print("Mean Squared Error on the test data:     {0:0.5f}".format(testScore))
    
        df1 = pd.DataFrame(data=(np.transpose(LOG_model.predict(LOG_norm_train_list[i][:-1].values)))[0], 
                     index=LOG_train_inputs.index, columns=['Predictions on the Train set'])
        df2 = pd.DataFrame(data=(np.transpose(LOG_model.predict(LOG_norm_test_list[i][:-1].values)))[0],
                  index=LOG_test_inputs.index, columns=['Predictions on the Test set'])
        df3 = pd.DataFrame(data=norm_stock_prices[i+1].loc[:, 'Adj Close'])
        df3.columns = ['Actual Data']
        name = get_ticker(LOG_norm_train_list[i])
        plot_3zoom(df1[::-1], df2[::-1], df3, 
                   title='Logistic Regression Performance on the Training and Test Sets, ' + name, 
                   xlabel='Date', ylabel='Price', zoom=False)
        nbr += 1
        print('======================================================================================================')
        
    print('======================================================================================================')
    print('Total run time in seconds: {0:0.0f}'.format(time.time()-global_time))
    
In [1]:
#plot_some_LOG_models()

Long Short Term Memory (LSTM)

In [34]:
# Define the LSTM model
def LSTM_model(inputs, output_size, neurons, activ_func="linear",
                dropout=0.5, loss="mean_squared_error", optimizer="adam"):
    model = Sequential()
    
    model.add(LSTM(neurons, input_shape=(inputs.shape[1], inputs.shape[2])))
    #model.add(Activation('tanh'))
    model.add(Dropout(dropout))
    
    model.add(Dense(units=output_size))
    #model.add(LeakyReLU())
    model.add(Activation(activ_func))
    
    model.compile(loss=loss, optimizer=optimizer)
    
    model.summary()
    return model


# Worse with a sigmoid activation function.
# tanh resulted in overfitting (better results on the training data, worse on the test data).
# Roughly the same training time as before.

MinMaxScale the data

In [56]:
# Create a copy to keep scaled and normalized data apart. [Have to use copy.deepcopy()]

scaled_norm_stock_prices = copy.deepcopy(norm_stock_prices)
In [57]:
LSTM_train_list, LSTM_test_list = [], []    # Create lists to store the train and test dataframes
norm_train_list, norm_test_list = [], []

# Create train and test sets for the stocks
for stock_price in range(1, len(norm_stock_prices)):
    test_size = int(len(scaled_norm_stock_prices[stock_price]) * 0.20)
    
    # Normalized
    norm_train = norm_stock_prices[stock_price][test_size:]
    norm_test = norm_stock_prices[stock_price][0:test_size]
    norm_train_list.append(norm_train)
    norm_test_list.append(norm_test)  
    
    # Normalized and scaled data
    MMscale_data(scaled_norm_stock_prices[stock_price])
    LSTM_train = scaled_norm_stock_prices[stock_price][test_size:]
    LSTM_test = scaled_norm_stock_prices[stock_price][0:test_size]
    # save each one into a list
    LSTM_train_list.append(LSTM_train)
    LSTM_test_list.append(LSTM_test)  

print("Training samples: {0}".format(len(LSTM_train)))
print("Test samples:     {0}".format(len(LSTM_test)))
print(LSTM_train.shape)
Training samples: 1042
Test samples:     260
(1042, 7)
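`MMscale_data` is a helper defined earlier in the notebook; judging from the output below, it maps each column into [0, 1]. A minimal stand-alone sketch of that kind of per-column min-max scaling (illustrative, not the notebook's exact helper):

```python
import pandas as pd

def minmax_scale_df(df):
    # Scale every column to [0, 1]: (x - min) / (max - min), column by column.
    return (df - df.min()) / (df.max() - df.min())

# Toy frame mimicking two of the notebook's columns.
df = pd.DataFrame({'Adj Close': [10.0, 12.0, 11.0, 14.0],
                   'Volume': [100.0, 300.0, 200.0, 400.0]})
scaled = minmax_scale_df(df)
print(scaled['Adj Close'].min(), scaled['Adj Close'].max())  # 0.0 1.0
```

Note that in the cell above the scaling is applied to the full series before the train/test slicing, so strictly speaking the test period's min and max leak into the scaler; fitting the scaler on the training portion alone would avoid that.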

Predicting one day with the LSTM

In [58]:
# Convert an array of values into a dataset matrix.
# window is the number of previous time steps to use as input variables to predict the next time period.
def create_LSTM_dataset(dataset, window=10):
    # dataset is an array
    dataX = [dataset[i:(i+window), :] for i in range(len(dataset)-window)]
    dataY = [dataset[j + window, 4] for j in range(len(dataset)-window)]
    return np.array(dataX), np.array(dataY)

# create_LSTM_dataset and create_LSTM_dataset2 produce the exact same results.

def create_LSTM_dataset2(dataset, window=10):
    # dataset is an array; window is the number of historical data points
    # the predictions are based on.
    dataX, dataY = [], []
    for i in range(len(dataset)-window):
        dataX.append(dataset[i:(i+window), :])
        dataY.append(dataset[i+window, 4])
    return np.array(dataX), np.array(dataY)


# (LSTMs apparently work best with time steps in the range of 200-400. I'll opt for 200.)
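The shapes produced by the windowing can be verified on a toy array. The helper below repeats the logic of `create_LSTM_dataset` (column 4, 'Adj Close', is the target) so the example is self-contained:

```python
import numpy as np

def create_LSTM_dataset(dataset, window=10):
    # dataset is a 2D array; the target is column 4 ('Adj Close'),
    # predicted from the preceding `window` rows of all features.
    dataX = [dataset[i:(i + window), :] for i in range(len(dataset) - window)]
    dataY = [dataset[j + window, 4] for j in range(len(dataset) - window)]
    return np.array(dataX), np.array(dataY)

# Toy data: 12 days, 7 features (mirroring the notebook's columns).
toy = np.arange(12 * 7).reshape(12, 7).astype(float)
X, y = create_LSTM_dataset(toy, window=3)
print(X.shape)  # (9, 3, 7): 9 samples, 3 time steps, 7 features
print(y.shape)  # (9,)
```

With 12 rows and a window of 3, only 12 - 3 = 9 targets exist, which is why the LSTM inputs below have one sample fewer than the original series when `window=1`.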
In [59]:
# Check if the scaling went as expected. 
display(scaled_norm_stock_prices[10].head())
Open High Low Close Adj Close Volume Volatility
BIOT.ST__Date
2018-03-02 0.823461 0.818182 0.802834 0.799534 0.803186 0.005035 0.261649
2018-03-01 0.867596 0.862189 0.833530 0.825175 0.828360 0.014571 0.352919
2018-02-28 0.825784 0.883034 0.831169 0.874126 0.876419 0.027687 0.548972
2018-02-27 0.847851 0.844818 0.831169 0.827506 0.830648 0.011233 0.246076
2018-02-26 0.836237 0.847134 0.848878 0.846154 0.848956 0.008849 0.133475
In [60]:
display(norm_stock_prices[10].head())
Open High Low Close Adj Close Volume Volatility
BIOT.ST__Date
2018-03-02 9.8625 9.563637 9.5000 9.365854 11.622211 1.077271 1.176173
2018-03-01 10.3375 10.024242 9.8250 9.634146 11.955139 3.117635 1.586457
2018-02-28 9.8875 10.242424 9.8000 10.146341 12.590728 5.923744 2.467762
2018-02-27 10.1250 9.842424 9.8000 9.658536 11.985405 2.403273 1.106171
2018-02-26 10.0000 9.866667 9.9875 9.853659 12.227535 1.893361 0.600000

Create inputs for the LSTM model

Specify how many days our model will base its predictions on by changing the window parameter.

In [61]:
# The predictions are far more reliable when using the scaled input data rather than the unscaled data 
# for the LSTM model (train_scaled and test_scaled are far better than train and test). After 5 epochs, 
# the loss with the scaled values is roughly 1/250th of the loss with the unscaled values.


window=1

# get ticker
tick = get_ticker(LSTM_train_list[0])  

"""Create the datasets"""
LSTM_train_input, LSTM_train_output = create_LSTM_dataset(LSTM_train_list[0].values, window)
LSTM_test_input, LSTM_test_output = create_LSTM_dataset(LSTM_test_list[0].values, window)

'''reshape the input to be [samples, time steps, features]'''
LSTM_test_input = np.reshape(LSTM_test_input, (LSTM_test_input.shape[0], LSTM_test_input.shape[1], 7))
LSTM_train_input = np.reshape(LSTM_train_input, (LSTM_train_input.shape[0], LSTM_train_input.shape[1], 7))


print(LSTM_train_input.shape)
print(LSTM_train_output.shape)
print('-----------')
print(LSTM_test_input.shape)
print(LSTM_test_output.shape)


# Check whether two arrays are equal
#print(np.array_equal(LSTM_train_input, testx))
#print(np.array_equal(LSTM_train_output, testy))
(1041, 1, 7)
(1041,)
-----------
(259, 1, 7)
(259,)
In [62]:
# Random seed for reproducibility
np.random.seed(2)

# Create the model
model = LSTM_model(LSTM_train_input, output_size = 1, neurons=20)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_3 (LSTM)                (None, 20)                2240      
_________________________________________________________________
dropout_3 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_75 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_3 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________

Train the LSTM model

In [63]:
trained_LSTM = model.fit(LSTM_train_input, LSTM_train_output, epochs=20, 
                         batch_size=1, verbose=1, shuffle=True, validation_split=0.05)
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 10s 10ms/step - loss: 0.0123 - val_loss: 0.0036
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 0.0020
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0034 - val_loss: 2.5097e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0030 - val_loss: 0.0028
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0012
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0038
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0035 - val_loss: 0.0035
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0034
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0016
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0029
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0015
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0013
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0016
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 0.0015
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 0.0011
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 0.0010
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 9.3101e-04
Epoch 18/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0022 - val_loss: 0.0014
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 6.4651e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 9.2262e-04

Plot the Training error

In [64]:
plot_error(trained_LSTM)

trainScore = model.evaluate(LSTM_train_input, LSTM_train_output, verbose=0)
testScore = model.evaluate(LSTM_test_input, LSTM_test_output, verbose=0)
print("Mean Squared Error on the training data: {0:0.6f}".format(trainScore))
print("Mean Squared Error on the test data:     {0:0.6f}".format(testScore))
Mean Squared Error on the training data: 0.000263
Mean Squared Error on the test data:     0.000362

Performance on the training and test sets

In [65]:
# A function for plotting three dataframes and a zoomed in plot
def plot_LSTM(data_df1, data_df2, data_df3, window_length, title='', xlabel='', ylabel='', zoom=True):
    """data_df1 contains the train set, data_df2 contains the test set and data_df3 contains the entire dataset. 
       title is the plot title. If a zoomed in window is desired, set zoom to True"""
    line_w, line_zoom = 1.0, 1.5
    plt.figure(figsize=(10, 5))
    # Plot the predicted train and test data
    pl = data_df1.plot(color='orchid', fontsize=12, figsize=(16, 7), label=data_df1.columns[0], linewidth=line_w)
    diff = len(data_df3)-len(data_df2)-len(data_df1)
    pred = np.empty_like(data_df3)
    pred[:, :] = np.nan
    pred[len(data_df1)+diff:len(data_df3), :] = data_df2
    ###pred[len(data_df1)+window_length+1:len(data_df3), :] = data_df2
    plt.plot(pred, color='darkorange', label=data_df2.columns[0], linewidth=line_w)
    
    # Plot the actual values
    plt.plot(data_df3, color='green', label=data_df3.columns[0], linewidth=line_w) 
    
    pl.set_title(label=title, fontsize=20)
    pl.set_xlabel(xlabel, fontsize=15) 
    plt.autoscale(enable=True, axis='x', tight=True)    
    pl.set_ylabel(ylabel, fontsize=15)
    plt.legend(fontsize=12, loc='upper left')
    plt.grid(axis='both', alpha=.5)
    pl.xaxis.set_major_locator(MaxNLocator(12))
    pl.xaxis.set_major_formatter(IndexFormatter(data_df3.index[::-1]))   
    plt.xticks(rotation=50, horizontalalignment='center', rotation_mode='default')
    
    if zoom:
        ## The zoomed in window
        lg = int(len(data_df3)*0.1)
        axins = zoomed_inset_axes(pl, 2, loc=9)
        axins.plot(data_df1, color='orchid', linewidth=line_zoom)
        axins.plot(pred, color='darkorange', label=data_df2.columns[0], linewidth=line_zoom)
        axins.plot(data_df3.loc[::-1], color='green', linewidth=line_zoom)
        x1, x2 = data_df1.index[-lg//2], data_df2.index[lg]    # specify the limits
        
        # Check for the max and min y values in the actual values, within the x limits.
        yA1 = data_df3.loc[data_df3.index[test_size-lg]:data_df3.index[test_size+lg//2],'Actual Data'].min()
        yA2 = data_df3.loc[data_df3.index[test_size-lg]:data_df3.index[test_size+lg//2],'Actual Data'].max()
        
        # Check for the max and min y values in the train set, within the x limits.
        yTr1 = data_df1.iloc[-lg//2:, 0].min()
        yTr2 = data_df1.iloc[-lg//2:, 0].max()
        
        # Check for the max and min y values in the test set, within the x limits.
        yTe1 = data_df2.iloc[:lg, 0].min()
        yTe2 = data_df2.iloc[:lg, 0].max()
        
        ys = [yA1, yA2, yTr1, yTr2, yTe1, yTe2]
        ymax, ymin = max(ys), min(ys)                       # find the max and min values among the different y's
        axins.set_xlim(x1, x2), axins.set_ylim(ymin, ymax)         # apply the x-limits, apply the y-limits
        #plt.yticks(visible=False), plt.xticks(visible=False)      # Remove the tickers
        axins.set_facecolor('whitesmoke')
        axins.axis[:].set_visible(False)                           # Remove the 4 borders
        mark_inset(pl, axins, loc1=2, loc2=4, fc="none", ec="1.5") # Add some lines for the zoom effect
    plt.show()

Invert the scaling

In [66]:
# The prediction output always has one column (that was what I chose when designing the LSTM).

# Make predictions for the train set. Then invert the scaling.
LSTM_train_pred = model.predict(copy.deepcopy(LSTM_train_input))
LSTM_train_pred = Un_scale_data(copy.deepcopy(LSTM_train_pred), tick)
LSTM_train_output = Un_scale_data(copy.deepcopy(LSTM_train_output), tick)

# Make predictions for the test set. Then invert the scaling.
LSTM_test_pred = model.predict(copy.deepcopy(LSTM_test_input))
LSTM_test_pred = Un_scale_data(copy.deepcopy(LSTM_test_pred), tick)
LSTM_test_output = Un_scale_data(copy.deepcopy(LSTM_test_output), tick)
In [67]:
print(LSTM_train_pred.shape)
print(LSTM_train_output.shape)
print(LSTM_test_pred.shape)
print(LSTM_test_output.shape)
(1041, 1)
(1041, 1)
(259, 1)
(259, 1)
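`Un_scale_data` is the notebook's helper for inverting the earlier min-max scaling using the stored per-ticker min and max. The round trip can be sketched as follows (a hypothetical minimal version, not the notebook's helper):

```python
import numpy as np

def minmax_scale(x, lo, hi):
    # Forward transform used before training: map [lo, hi] onto [0, 1].
    return (x - lo) / (hi - lo)

def un_scale(x_scaled, lo, hi):
    # Inverse transform: x = x_scaled * (hi - lo) + lo
    return x_scaled * (hi - lo) + lo

prices = np.array([95.0, 120.0, 110.0])
lo, hi = prices.min(), prices.max()
scaled = minmax_scale(prices, lo, hi)
restored = un_scale(scaled, lo, hi)
print(np.allclose(restored, prices))  # True
```

Inverting the scaling this way puts the predictions back into price units, so they can be plotted directly against the normalized actual data.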
In [68]:
df1 = pd.DataFrame(data=LSTM_train_pred, index=LSTM_train_list[0].index[:-window], 
                   columns=['LSTM Predictions on Train set'])
df2 = pd.DataFrame(data=LSTM_test_pred, index=LSTM_test_list[0].index[:-window], 
                   columns=['LSTM Predictions on Test Set'])
df3 = pd.DataFrame(data=norm_stock_prices[1].loc[:, 'Adj Close'][:-window], index=glob_index[:-window])
df3.columns = ['Actual Data']

name = get_ticker(LSTM_train_list[0])
plot_LSTM(df1[::-1], df2[::-1], df3, window_length=window, 
          title= 'LSTM Single Day Performance on the Training and Test Sets, ' + name, 
          xlabel='Date', ylabel='Price', zoom=True)
<matplotlib.figure.Figure at 0x1a28a2a940>

Our LSTM seems to predict the changes in the stock price fairly well, even very well on the training set. That is not surprising, given that this is the data the model was trained on. More important is how it performs on the unseen test data (orange), where it still gives quite acceptable predictions. There are a few misses, but by and large the results are good.

In [69]:
# def many_LSTM_models(nbr_of_plots=3)
def many_LSTM_models():
    global_time = time.time()
    nbr = 1
    window=1
    for i in range(1, len(LSTM_train_list)):
        print('===================')
        print('Plot: {0} (out of {1})'.format(nbr, len(LSTM_train_list)-1))
        print('===================')
        
        LSTM_train_input, LSTM_train_output = create_LSTM_dataset(LSTM_train_list[i].values, window)
        LSTM_test_input, LSTM_test_output = create_LSTM_dataset(LSTM_test_list[i].values, window)
        '''reshape the input to be [samples, time steps, features]'''
        LSTM_test_input = np.reshape(LSTM_test_input, (LSTM_test_input.shape[0], LSTM_test_input.shape[1], 7))
        LSTM_train_input = np.reshape(LSTM_train_input, (LSTM_train_input.shape[0], LSTM_train_input.shape[1], 7))
        
        # Random seed for reproducibility
        np.random.seed(2)
        model = LSTM_model(LSTM_train_input, output_size = 1, neurons=20)
        trained_LSTM = model.fit(LSTM_train_input, LSTM_train_output, epochs=20, 
                                 batch_size=1, verbose=1, shuffle=True, validation_split=0.05)
        
        trainScore = model.evaluate(LSTM_train_input, LSTM_train_output, verbose=0)
        testScore = model.evaluate(LSTM_test_input, LSTM_test_output, verbose=0)
        print("Mean Squared Error on the training data: {0:0.6f}".format(trainScore))
        print("Mean Squared Error on the test data:     {0:0.6f}".format(testScore))
        
        tick = get_ticker(LSTM_train_list[i])
        # Make predictions for the train set. Then invert the scaling.
        LSTM_train_pred = model.predict(copy.deepcopy(LSTM_train_input))
        LSTM_train_pred = Un_scale_data(copy.deepcopy(LSTM_train_pred), tick)
        LSTM_train_output = Un_scale_data(copy.deepcopy(LSTM_train_output), tick)

        # Make predictions for the test set. Then invert the scaling.
        LSTM_test_pred = model.predict(copy.deepcopy(LSTM_test_input))
        LSTM_test_pred = Un_scale_data(copy.deepcopy(LSTM_test_pred), tick)
        LSTM_test_output = Un_scale_data(copy.deepcopy(LSTM_test_output), tick)
        
        df1 = pd.DataFrame(data=LSTM_train_pred, index=LSTM_train_list[i].index[:-window], 
                           columns=['LSTM Predictions on Train set'])
        df2 = pd.DataFrame(data=LSTM_test_pred, index=LSTM_test_list[i].index[:-window], 
                           columns=['LSTM Predictions on Test Set'])
        df3 = pd.DataFrame(data=norm_stock_prices[1+i].loc[:, 'Adj Close'][:-window], index=glob_index[:-window])
        df3.columns = ['Actual Data']

        name = get_ticker(LSTM_train_list[i])
        plot_LSTM(df1[::-1], df2[::-1], df3, window_length=window, 
                  title= 'LSTM Single Day Performance on the Training and Test Sets, ' + name, 
                  xlabel='Date', ylabel='Price', zoom=True)
        nbr += 1
        print('======================================================================================================')
    print('======================================================================================================')
    print('Total run time in seconds: {0:0.0f}'.format(time.time()-global_time))
    
    
   
    
In [70]:
many_LSTM_models()
===================
Plot: 1 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_4 (LSTM)                (None, 20)                2240      
_________________________________________________________________
dropout_4 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_76 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_4 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0173 - val_loss: 0.0080
Epoch 2/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0061 - val_loss: 0.0034
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0035 - val_loss: 0.0011
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0023 - val_loss: 0.0021
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0019
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 0.0010
Epoch 7/20
988/988 [==============================] - 10s 10ms/step - loss: 0.0015 - val_loss: 0.0013
Epoch 8/20
988/988 [==============================] - 10s 10ms/step - loss: 0.0013 - val_loss: 0.0010
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 0.0017
Epoch 10/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0011 - val_loss: 8.9274e-04
Epoch 11/20
988/988 [==============================] - 9s 10ms/step - loss: 0.0011 - val_loss: 0.0013
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 9.6202e-04 - val_loss: 2.9134e-04
Epoch 13/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0010 - val_loss: 5.8867e-04
Epoch 14/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0012 - val_loss: 6.8802e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0011 - val_loss: 1.2133e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 2.5872e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0011 - val_loss: 7.1422e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 9.8178e-04 - val_loss: 3.3705e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 3.9073e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0011 - val_loss: 1.5551e-04
Mean Squared Error on the training data: 0.000438
Mean Squared Error on the test data:     0.000572
<matplotlib.figure.Figure at 0x1a28b98eb8>
======================================================================================================
===================
Plot: 2 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_5 (LSTM)                (None, 20)                2240      
_________________________________________________________________
dropout_5 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_77 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_5 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0106 - val_loss: 0.0018
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 0.0019
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0013
Epoch 4/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0022 - val_loss: 0.0016
Epoch 5/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0020 - val_loss: 0.0016
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0018
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0019
Epoch 8/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0018 - val_loss: 9.7541e-04
Epoch 9/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0019 - val_loss: 0.0021
Epoch 10/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0017 - val_loss: 0.0019
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 0.0014
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 6.2318e-04
Epoch 13/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0019 - val_loss: 0.0022
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 3.1127e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 0.0012
Epoch 16/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0018 - val_loss: 0.0010
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0017
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 0.0018
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 9.5391e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 8.2253e-04
Mean Squared Error on the training data: 0.000390
Mean Squared Error on the test data:     0.001102
<matplotlib.figure.Figure at 0x1a241710f0>
======================================================================================================
===================
Plot: 3 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_6 (LSTM)                (None, 20)                2240      
_________________________________________________________________
dropout_6 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_78 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_6 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 10s 10ms/step - loss: 0.0248 - val_loss: 3.8561e-04
Epoch 2/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0089 - val_loss: 0.0035
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0067 - val_loss: 0.0077
Epoch 4/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0064 - val_loss: 6.0208e-04
Epoch 5/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0055 - val_loss: 5.5268e-04
Epoch 6/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0051 - val_loss: 0.0035
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0047 - val_loss: 0.0016
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 5.3526e-04
Epoch 9/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0049 - val_loss: 9.3050e-04
Epoch 10/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0042 - val_loss: 0.0023
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 4.5319e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 6.3114e-04
Epoch 13/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0050 - val_loss: 0.0011
Epoch 14/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0048 - val_loss: 2.8373e-04
Epoch 15/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0046 - val_loss: 3.1242e-04
Epoch 16/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0052 - val_loss: 4.2650e-04
Epoch 17/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0046 - val_loss: 9.3851e-04
Epoch 18/20
988/988 [==============================] - 7s 8ms/step - loss: 0.0046 - val_loss: 0.0043
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0051 - val_loss: 7.0609e-04
Epoch 20/20
988/988 [==============================] - 7s 7ms/step - loss: 0.0047 - val_loss: 6.0975e-04
Mean Squared Error on the training data: 0.000329
Mean Squared Error on the test data:     0.000023
<matplotlib.figure.Figure at 0x1a26575f98>
======================================================================================================
===================
Plot: 4 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_7 (LSTM)                (None, 20)                2240      
_________________________________________________________________
dropout_7 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_79 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_7 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0176 (epoch 1) → 0.0029 (epoch 20); val_loss 0.0017 → 8.1306e-05
Mean Squared Error on the training data: 0.000163
Mean Squared Error on the test data:     0.000733
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 5 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_8 (LSTM)                (None, 20)                2240      
_________________________________________________________________
dropout_8 (Dropout)          (None, 20)                0         
_________________________________________________________________
dense_80 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_8 (Activation)    (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
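The parameter counts in the summary above can be checked by hand. An LSTM layer with n hidden units and d input features holds 4·(n·(n + d) + n) weights (four gates, each with an input kernel, a recurrent kernel, and a bias), and the Dense layer adds n·1 weights plus one bias; Dropout and Activation contribute none. The reported 2,240 LSTM parameters with n = 20 imply d = 7 input features per timestep — an inference from the count, not something stated in the log. A minimal sketch:

```python
def lstm_params(n_units, n_features):
    # Four gates (input, forget, cell, output), each with a kernel over the
    # inputs, a recurrent kernel over the hidden state, and a bias vector.
    return 4 * (n_units * (n_units + n_features) + n_units)

def dense_params(n_in, n_out):
    # Weight matrix plus one bias per output unit.
    return n_in * n_out + n_out

n_units = 20
n_features = 7  # inferred from the 2,240 LSTM parameters in the summary
print(lstm_params(n_units, n_features))                            # 2240
print(lstm_params(n_units, n_features) + dense_params(n_units, 1)) # 2261
```

This reproduces both the 2,240 LSTM parameters and the "Total params: 2,261" line (Dropout and Activation add zero, so trainable equals total).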
Train on 970 samples, validate on 52 samples
20 epochs (~8 s each): loss 0.0206 (epoch 1) → 0.0019 (epoch 20); val_loss 0.0061 → 8.8272e-05
Mean Squared Error on the training data: 0.000622
Mean Squared Error on the test data:     0.002976
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 6 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0186 (epoch 1) → 0.0016 (epoch 20); val_loss 0.0076 → 4.7376e-04
Mean Squared Error on the training data: 0.000318
Mean Squared Error on the test data:     0.000486
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 7 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0084 (epoch 1) → 0.0014 (epoch 20); val_loss 9.8595e-04 → 2.7520e-04
Mean Squared Error on the training data: 0.000121
Mean Squared Error on the test data:     0.001067
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 8 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0152 (epoch 1) → 0.0022 (epoch 20); val_loss 0.0022 → 3.8215e-04
Mean Squared Error on the training data: 0.000315
Mean Squared Error on the test data:     0.000321
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 9 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0043 (epoch 1) → 9.9283e-04 (epoch 20); val_loss 2.1286e-04 → 2.1248e-05
Mean Squared Error on the training data: 0.000023
Mean Squared Error on the test data:     0.001426
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 10 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~7 s each): loss 0.0298 (epoch 1) → 0.0031 (epoch 20); val_loss 0.0060 → 1.8831e-04
Mean Squared Error on the training data: 0.000439
Mean Squared Error on the test data:     0.001351
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 11 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0153 (epoch 1) → 0.0018 (epoch 20); val_loss 0.0038 → 3.2770e-04
Mean Squared Error on the training data: 0.000232
Mean Squared Error on the test data:     0.001934
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 12 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0198 (epoch 1) → 0.0039 (epoch 20); val_loss 5.2802e-04 → 9.7215e-05
Mean Squared Error on the training data: 0.000145
Mean Squared Error on the test data:     0.000205
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 13 (out of 71)
===================
Model: same LSTM(20) → Dropout → Dense(1) → Activation stack as above (2,261 trainable params)
Train on 988 samples, validate on 53 samples
20 epochs (~8 s each): loss 0.0123 (epoch 1) → 0.0025 (epoch 20); val_loss 4.1847e-04 → 1.8069e-04
Mean Squared Error on the training data: 0.000350
Mean Squared Error on the test data:     0.001967
[Figure output not rendered in this export]
======================================================================================================
===================
Plot: 14 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0316 to 0.0053, final val_loss 9.8370e-05.
Mean Squared Error on the training data: 0.000201
Mean Squared Error on the test data:     0.000312
[Figure: rendered matplotlib plot omitted]
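The same architecture is summarized before every run. As a sanity check on the reported parameter counts, the standard Keras formulas (four gated weight sets for the LSTM; weights plus bias for the Dense layer) reproduce the 2,261 total. Note that the implied input width of 7 features is an inference from the counts, not something stated explicitly in the output:

```python
def lstm_param_count(n_features, units):
    # Keras LSTM: 4 gates, each with input weights (n_features x units),
    # recurrent weights (units x units) and a bias vector (units)
    return 4 * units * (n_features + units + 1)

def dense_param_count(n_in, n_out):
    # one weight per input per output unit, plus one bias per output unit
    return n_out * (n_in + 1)

# The summaries report 2,240 LSTM parameters and 21 Dense parameters,
# consistent with 20 LSTM units fed by 7 input features (inferred):
assert lstm_param_count(7, 20) == 2240
assert dense_param_count(20, 1) == 21
print(lstm_param_count(7, 20) + dense_param_count(20, 1))  # 2261
```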
======================================================================================================
===================
Plot: 15 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0190 to 0.0016, final val_loss 2.3428e-04.
Mean Squared Error on the training data: 0.000235
Mean Squared Error on the test data:     0.000615
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 16 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0206 to 0.0025, final val_loss 5.1047e-04.
Mean Squared Error on the training data: 0.000753
Mean Squared Error on the test data:     0.000521
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 17 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0054 to 0.0012, final val_loss 1.6292e-05.
Mean Squared Error on the training data: 0.000056
Mean Squared Error on the test data:     0.001001
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 18 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0159 to 0.0014, final val_loss 4.2910e-04.
Mean Squared Error on the training data: 0.000146
Mean Squared Error on the test data:     0.000867
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 19 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0284 to 0.0031, final val_loss 4.8565e-04.
Mean Squared Error on the training data: 0.000383
Mean Squared Error on the test data:     0.001565
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 20 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0136 to 0.0014, final val_loss 1.7967e-04.
Mean Squared Error on the training data: 0.000235
Mean Squared Error on the test data:     0.002669
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 21 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0095 to 0.0010, final val_loss 1.1104e-04.
Mean Squared Error on the training data: 0.000087
Mean Squared Error on the test data:     0.004219
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 22 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0218 to 0.0023, final val_loss 1.2054e-04.
Mean Squared Error on the training data: 0.000205
Mean Squared Error on the test data:     0.000865
[Figure: rendered matplotlib plot omitted]
======================================================================================================
===================
Plot: 23 (out of 71)
===================
Model: LSTM (20 units) -> Dropout -> Dense(1) -> Activation; 2,261 trainable parameters.
Trained for 20 epochs on 988 samples (validating on 53, ~8 s/epoch); loss fell from 0.0378 to 0.0044, final val_loss 6.6660e-04.
Mean Squared Error on the training data: 0.000545
Mean Squared Error on the test data:     0.002764
[Figure: rendered matplotlib plot omitted]
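Across these runs the architecture and training setup are identical, yet the test MSE varies by more than an order of magnitude. A quick tabulation (values copied directly from the outputs for plots 14–23 above) makes the spread explicit:

```python
# (train_mse, test_mse) pairs copied from the outputs of plots 14-23 above
results = {
    14: (0.000201, 0.000312),
    15: (0.000235, 0.000615),
    16: (0.000753, 0.000521),
    17: (0.000056, 0.001001),
    18: (0.000146, 0.000867),
    19: (0.000383, 0.001565),
    20: (0.000235, 0.002669),
    21: (0.000087, 0.004219),
    22: (0.000205, 0.000865),
    23: (0.000545, 0.002764),
}

# Rank the runs by test-set error to see which stocks were hardest to fit
best = min(results, key=lambda p: results[p][1])
worst = max(results, key=lambda p: results[p][1])
print(best, worst)  # 14 21 -- plot 14 has the lowest test MSE, plot 21 the highest
```

Low training error paired with a comparatively high test error (e.g. plot 21: 0.000087 vs. 0.004219) is the usual sign of overfitting, which is worth keeping in mind when interpreting the plots.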
======================================================================================================
===================
Plot: 24 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_27 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_27 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_99 (Dense)             (None, 1)                 21        
_________________________________________________________________
activation_27 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
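Each model summary above reports 2,240 parameters for the LSTM layer and 21 for the Dense layer. These counts follow from the standard LSTM parameter formula, 4 × (units × (units + n_features) + units), where the factor 4 covers the input, forget, cell, and output gates. A quick arithmetic check (assuming 20 LSTM units and 7 input features, which is what the 2,240 figure implies; neither value is stated explicitly in this output) reproduces the totals:

```python
units = 20        # LSTM units, as shown in the Output Shape column
n_features = 7    # input features per timestep (inferred from the param count)

# 4 gates, each with a kernel (n_features), recurrent kernel (units), and bias
lstm_params = 4 * (units * (units + n_features) + units)   # 2240
dense_params = units * 1 + 1                               # weights + bias = 21
total = lstm_params + dense_params                         # 2261

print(lstm_params, dense_params, total)  # 2240 21 2261
```

The Dropout and Activation layers contribute no trainable parameters, so the total matches the 2,261 reported by Keras.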
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 11s 11ms/step - loss: 0.0192 - val_loss: 6.0421e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0069 - val_loss: 3.3518e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0055 - val_loss: 3.1391e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 7.6749e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 5.7897e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 9.0851e-04
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 9.4467e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 3.7499e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 3.5762e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 6.3799e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 8.5159e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 1.6766e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 5.4202e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0035 - val_loss: 7.3321e-06
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 3.2310e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 5.2037e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 8.4264e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 7.9197e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 1.1784e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 5.5676e-05
Mean Squared Error on the training data: 0.000118
Mean Squared Error on the test data:     0.000254
======================================================================================================
===================
Plot: 25 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_28 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_28 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_100 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_28 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0380 - val_loss: 4.9526e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0132 - val_loss: 0.0040
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0096 - val_loss: 0.0066
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0088 - val_loss: 6.3152e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0076 - val_loss: 1.1794e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0068 - val_loss: 0.0065
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0059 - val_loss: 7.4375e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0059 - val_loss: 1.5185e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0061 - val_loss: 5.4408e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0056 - val_loss: 7.0852e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0055 - val_loss: 9.6396e-05
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0055 - val_loss: 2.6681e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0062 - val_loss: 0.0011
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0056 - val_loss: 1.2773e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0057 - val_loss: 1.7894e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0065 - val_loss: 7.0387e-05
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0056 - val_loss: 1.7908e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0057 - val_loss: 0.0028
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0058 - val_loss: 1.2699e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0057 - val_loss: 7.7679e-04
Mean Squared Error on the training data: 0.000629
Mean Squared Error on the test data:     0.000076
======================================================================================================
===================
Plot: 26 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_29 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_29 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_101 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_29 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0111 - val_loss: 0.0026
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 0.0020
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 0.0011
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 0.0019
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 9.6012e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0018
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0019
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 0.0011
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 0.0018
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0021
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0016
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 8.6996e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 8.8621e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 1.6331e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0011
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 8.8311e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 0.0018
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 0.0013
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 5.7729e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 6.0580e-04
Mean Squared Error on the training data: 0.000184
Mean Squared Error on the test data:     0.000368
======================================================================================================
===================
Plot: 27 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_30 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_30 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_102 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_30 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0230 - val_loss: 0.0026
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0085 - val_loss: 0.0012
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0063 - val_loss: 0.0013
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 0.0019
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 0.0016
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0034 - val_loss: 0.0015
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0011
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0034 - val_loss: 2.6533e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0034 - val_loss: 6.7770e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0030 - val_loss: 0.0014
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 8.5145e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0031 - val_loss: 5.6867e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0031 - val_loss: 0.0016
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0030 - val_loss: 6.3568e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 9.9048e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0033 - val_loss: 4.1956e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0032 - val_loss: 7.4426e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 0.0016
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0033 - val_loss: 6.8604e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0031 - val_loss: 1.5977e-04
Mean Squared Error on the training data: 0.000344
Mean Squared Error on the test data:     0.005442
======================================================================================================
===================
Plot: 28 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_31 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_31 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_103 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_31 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0017 - val_loss: 3.0394e-05
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 5.0814e-04 - val_loss: 3.8856e-05
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 4.3099e-04 - val_loss: 9.8824e-05
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 3.7903e-04 - val_loss: 1.7320e-05
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 3.4576e-04 - val_loss: 8.3463e-06
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 2.9030e-04 - val_loss: 2.2060e-05
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 5.4348e-04 - val_loss: 1.6809e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 3.1934e-04 - val_loss: 8.1592e-06
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 3.1037e-04 - val_loss: 1.2992e-05
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 2.8699e-04 - val_loss: 1.7706e-05
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 3.8835e-04 - val_loss: 3.4156e-05
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 1.9904e-04 - val_loss: 2.5355e-05
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 3.7037e-04 - val_loss: 9.8392e-06
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 3.3785e-04 - val_loss: 3.4959e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 3.1401e-04 - val_loss: 1.5265e-05
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 2.8457e-04 - val_loss: 8.3088e-06
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 3.0544e-04 - val_loss: 9.1383e-06
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 3.0707e-04 - val_loss: 1.7612e-05
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 3.1146e-04 - val_loss: 8.6506e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 3.0697e-04 - val_loss: 7.4058e-05
Mean Squared Error on the training data: 0.000070
Mean Squared Error on the test data:     0.001422
======================================================================================================
===================
Plot: 29 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_32 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_32 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_104 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_32 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0244 - val_loss: 0.0110
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0093 - val_loss: 0.0054
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0061 - val_loss: 0.0031
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 0.0046
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0030 - val_loss: 0.0034
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 0.0027
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 8.6332e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0015
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0027
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0013
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0025
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 7.7460e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0016
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 6.6669e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 1.8586e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 6.3972e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 2.1983e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 6.3609e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 2.9387e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 7.2953e-04
Mean Squared Error on the training data: 0.000518
Mean Squared Error on the test data:     0.001165
======================================================================================================
===================
Plot: 30 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_33 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_33 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_105 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_33 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 52 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0286 - val_loss: 0.0054
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0100 - val_loss: 0.0019
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0076 - val_loss: 0.0026
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0055 - val_loss: 0.0036
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 0.0029
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0041 - val_loss: 0.0032
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0043 - val_loss: 0.0029
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 0.0018
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 0.0020
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0032 - val_loss: 0.0016
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0032 - val_loss: 0.0014
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 7.9243e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0014
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 1.5864e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0033 - val_loss: 9.1771e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 0.0012
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0014
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0033 - val_loss: 0.0017
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 3.2999e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 4.9581e-04
Mean Squared Error on the training data: 0.000402
Mean Squared Error on the test data:     0.001213
======================================================================================================
===================
Plot: 31 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_34 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_34 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_106 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_34 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0113 - val_loss: 0.0025
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 0.0021
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0030 - val_loss: 0.0013
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0030
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 0.0023
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0029
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0037
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 0.0022
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0025
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0036
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0025
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0021
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 0.0027
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0016
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0016
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0019
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0025
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 0.0040
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0014
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0022
Mean Squared Error on the training data: 0.000451
Mean Squared Error on the test data:     0.000613
======================================================================================================
===================
Plot: 32 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_35 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_35 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_107 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_35 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 12s 12ms/step - loss: 0.0310 - val_loss: 0.0043
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0123 - val_loss: 0.0052
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0086 - val_loss: 0.0028
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0072 - val_loss: 0.0034
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0053 - val_loss: 0.0033
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 0.0035
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 0.0028
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 0.0014
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 0.0014
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0041 - val_loss: 0.0010
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0020
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 0.0012
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 4.6578e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 2.5785e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 0.0015
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0041 - val_loss: 2.4360e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 5.0744e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 0.0015
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 2.3657e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 2.9656e-04
Mean Squared Error on the training data: 0.000374
Mean Squared Error on the test data:     0.000526
======================================================================================================
===================
Plot: 33 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_36 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_36 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_108 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_36 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0033 - val_loss: 1.1161e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0011 - val_loss: 1.2854e-04
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 8.6851e-04 - val_loss: 5.4578e-05
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 7.3216e-04 - val_loss: 3.3959e-05
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 6.7157e-04 - val_loss: 4.8503e-05
Epoch 6/20
988/988 [==============================] - 8s 9ms/step - loss: 7.9288e-04 - val_loss: 1.8560e-04
Epoch 7/20
988/988 [==============================] - 8s 9ms/step - loss: 8.1042e-04 - val_loss: 2.9703e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 7.4039e-04 - val_loss: 9.9249e-05
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 6.6471e-04 - val_loss: 7.6054e-05
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 6.3914e-04 - val_loss: 1.0015e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 6.2926e-04 - val_loss: 1.0240e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 5.7318e-04 - val_loss: 4.7571e-05
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 7.7710e-04 - val_loss: 7.0800e-05
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 6.9907e-04 - val_loss: 5.6549e-05
Epoch 15/20
988/988 [==============================] - 8s 9ms/step - loss: 6.8570e-04 - val_loss: 5.6696e-06
Epoch 16/20
988/988 [==============================] - 8s 9ms/step - loss: 7.4234e-04 - val_loss: 7.6008e-05
Epoch 17/20
988/988 [==============================] - 8s 9ms/step - loss: 5.8195e-04 - val_loss: 2.1202e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 6.4145e-04 - val_loss: 1.5134e-04
Epoch 19/20
988/988 [==============================] - 8s 9ms/step - loss: 6.8017e-04 - val_loss: 3.6396e-06
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 6.8937e-04 - val_loss: 2.1779e-06
Mean Squared Error on the training data: 0.000030
Mean Squared Error on the test data:     0.001692
======================================================================================================
===================
Plot: 34 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_37 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_37 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_109 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_37 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0052 - val_loss: 5.6875e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 5.9597e-04
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0013 - val_loss: 2.3164e-04
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0010 - val_loss: 2.3475e-04
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 9.8079e-04 - val_loss: 1.4479e-04
Epoch 6/20
988/988 [==============================] - 8s 9ms/step - loss: 8.9039e-04 - val_loss: 7.3329e-04
Epoch 7/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0011 - val_loss: 6.4504e-04
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 9.6983e-04 - val_loss: 2.6106e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 9.0176e-04 - val_loss: 6.6733e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 8.5451e-04 - val_loss: 4.3412e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 8.9250e-04 - val_loss: 5.7180e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 8.0470e-04 - val_loss: 1.8969e-04
Epoch 13/20
988/988 [==============================] - 8s 9ms/step - loss: 9.4486e-04 - val_loss: 3.3338e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 9.3396e-04 - val_loss: 1.7562e-05
Epoch 15/20
988/988 [==============================] - 8s 9ms/step - loss: 9.3301e-04 - val_loss: 2.7040e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 8.9903e-04 - val_loss: 2.9880e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 9.0065e-04 - val_loss: 7.3809e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 8.6304e-04 - val_loss: 4.8506e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 9.4175e-04 - val_loss: 1.7780e-04
Epoch 20/20
988/988 [==============================] - 8s 9ms/step - loss: 9.7790e-04 - val_loss: 1.2687e-04
Mean Squared Error on the training data: 0.000099
Mean Squared Error on the test data:     0.003106
======================================================================================================
===================
Plot: 35 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_38 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_38 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_110 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_38 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0268 - val_loss: 9.7763e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0093 - val_loss: 4.4244e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0075 - val_loss: 7.2900e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0060 - val_loss: 0.0013
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0056 - val_loss: 7.4297e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 0.0018
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0054 - val_loss: 0.0012
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 4.2836e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 9.8804e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 0.0011
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 0.0010
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0048 - val_loss: 2.7311e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0052 - val_loss: 0.0010
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 4.1085e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 3.8673e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 6.5581e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 9.1775e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 0.0013
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 3.3399e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 1.8081e-04
Mean Squared Error on the training data: 0.000158
Mean Squared Error on the test data:     0.000417
======================================================================================================
===================
Plot: 36 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_39 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_39 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_111 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_39 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0209 - val_loss: 3.2755e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0072 - val_loss: 5.1054e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0059 - val_loss: 3.0694e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0053 - val_loss: 5.8838e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0048 - val_loss: 8.9792e-05
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 0.0010
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0047 - val_loss: 3.5395e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 5.9282e-06
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 3.9874e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0043 - val_loss: 7.1054e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 6.7154e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 7.2731e-05
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 4.0871e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 5.1338e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 2.6206e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 1.6709e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0043 - val_loss: 1.6332e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 8.1113e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0048 - val_loss: 1.2735e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 6.2658e-05
Mean Squared Error on the training data: 0.000207
Mean Squared Error on the test data:     0.000261
======================================================================================================
===================
Plot: 37 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_40 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_40 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_112 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_40 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0097 - val_loss: 0.0018
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0011
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 1.9603e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0014
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 8.4256e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0012
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 0.0018
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 0.0019
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 8.0624e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 0.0011
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 0.0010
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 8.8065e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0014
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 5.7065e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 6.0568e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 6.6506e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 8.2424e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 0.0010
Epoch 19/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0019 - val_loss: 6.0804e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 4.3729e-04
Mean Squared Error on the training data: 0.000277
Mean Squared Error on the test data:     0.000358
======================================================================================================
===================
Plot: 38 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_41 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_41 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_113 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_41 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 14ms/step - loss: 0.0125 - val_loss: 0.0016
Epoch 2/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0044 - val_loss: 8.9273e-04
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0028 - val_loss: 2.3528e-04
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0021 - val_loss: 8.5317e-04
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0017 - val_loss: 5.8436e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 5.5162e-04
Epoch 7/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0017 - val_loss: 8.8124e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0011
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0014 - val_loss: 4.4852e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 8.2390e-04
Epoch 11/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0013 - val_loss: 6.7798e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 9.3572e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 3.5115e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 1.8641e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 5.2626e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 5.2543e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 0.0012
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0014 - val_loss: 3.5113e-04
Epoch 19/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0012 - val_loss: 2.5324e-04
Epoch 20/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0014 - val_loss: 2.5129e-04
Mean Squared Error on the training data: 0.000233
Mean Squared Error on the test data:     0.001574
======================================================================================================
===================
Plot: 39 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_42 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_42 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_114 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_42 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0139 - val_loss: 5.8621e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0050 - val_loss: 0.0010
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0034 - val_loss: 4.1205e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0034 - val_loss: 6.0191e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 2.3745e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0013
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 2.3667e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 2.3046e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 3.6808e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 4.6814e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 2.5946e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 2.7747e-05
Epoch 13/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0027 - val_loss: 4.9824e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 2.2115e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 7.8495e-05
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 5.5444e-05
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 2.9717e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 4.8361e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 1.6303e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 1.4820e-04
Mean Squared Error on the training data: 0.000185
Mean Squared Error on the test data:     0.000331
======================================================================================================
===================
Plot: 40 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_43 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_43 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_115 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_43 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 13ms/step - loss: 0.0033 - val_loss: 0.0011
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 8.7694e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 9.2457e-04 - val_loss: 6.9744e-05
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 8.2101e-04 - val_loss: 6.3133e-04
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 7.0606e-04 - val_loss: 5.4205e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 6.6377e-04 - val_loss: 3.4190e-04
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 8.4208e-04 - val_loss: 7.4894e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 6.9722e-04 - val_loss: 9.4457e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 6.4214e-04 - val_loss: 4.4738e-04
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 6.7409e-04 - val_loss: 6.6257e-04
Epoch 11/20
988/988 [==============================] - 8s 9ms/step - loss: 6.3852e-04 - val_loss: 3.0403e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 5.8242e-04 - val_loss: 2.8021e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 6.4256e-04 - val_loss: 2.1889e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 6.2696e-04 - val_loss: 6.3492e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 6.5142e-04 - val_loss: 1.9913e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 6.0018e-04 - val_loss: 2.9666e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 5.9647e-04 - val_loss: 4.1748e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 5.5946e-04 - val_loss: 2.7065e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 5.5820e-04 - val_loss: 6.3332e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 6.0263e-04 - val_loss: 1.9256e-04
Mean Squared Error on the training data: 0.000123
Mean Squared Error on the test data:     0.000466
======================================================================================================
===================
Plot: 41 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_44 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_44 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_116 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_44 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0186 - val_loss: 0.0078
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0068 - val_loss: 0.0039
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 0.0031
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0032 - val_loss: 0.0031
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 0.0038
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 0.0026
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0020 - val_loss: 0.0029
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 0.0021
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0014
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0018 - val_loss: 0.0015
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 0.0011
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 0.0011
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 0.0020
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 4.4369e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 4.7615e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 5.7222e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 0.0010
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 8.1826e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 9.5012e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 1.3027e-04
Mean Squared Error on the training data: 0.000370
Mean Squared Error on the test data:     0.002256
======================================================================================================
===================
Plot: 42 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_45 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_45 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_117 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_45 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0244 - val_loss: 0.0040
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0089 - val_loss: 0.0020
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0071 - val_loss: 0.0014
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0051 - val_loss: 0.0022
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0046 - val_loss: 0.0014
Epoch 6/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0039 - val_loss: 0.0018
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0042 - val_loss: 0.0013
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0039 - val_loss: 0.0010
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0039 - val_loss: 6.5714e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0032 - val_loss: 0.0011
Epoch 11/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0033 - val_loss: 0.0012
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 2.3375e-04
Epoch 13/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0036 - val_loss: 9.4175e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0037 - val_loss: 2.2182e-05
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 3.8794e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 6.4714e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 7.6892e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0034 - val_loss: 0.0011
Epoch 19/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0039 - val_loss: 2.4216e-05
Epoch 20/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0034 - val_loss: 1.7042e-04
Mean Squared Error on the training data: 0.000189
Mean Squared Error on the test data:     0.000124
======================================================================================================
===================
Plot: 43 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_46 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_46 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_118 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_46 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0303 - val_loss: 0.0050
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0145 - val_loss: 0.0016
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0102 - val_loss: 8.9452e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0072 - val_loss: 0.0021
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0057 - val_loss: 0.0025
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0050 - val_loss: 6.4340e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0049 - val_loss: 0.0024
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0043 - val_loss: 9.3242e-04
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0038 - val_loss: 0.0013
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0039 - val_loss: 6.0284e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 7.2900e-04
Epoch 12/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0033 - val_loss: 0.0020
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0033 - val_loss: 6.9777e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0031 - val_loss: 0.0021
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0034 - val_loss: 5.1754e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 0.0010
Epoch 17/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0032 - val_loss: 6.4697e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0033 - val_loss: 5.5597e-04
Epoch 19/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0031 - val_loss: 0.0010
Epoch 20/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0032 - val_loss: 5.9067e-04
Mean Squared Error on the training data: 0.000639
Mean Squared Error on the test data:     0.000613
======================================================================================================
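The model summaries repeated above all describe the same architecture: LSTM(20) → Dropout → Dense(1) → Activation, with 2,261 trainable parameters. As a sanity check, the parameter counts in the summaries can be reproduced from the standard Keras parameterization of these layers; the input feature dimension of 7 below is inferred from the reported LSTM parameter count, not stated in the output itself:

```python
# Reproduce "Total params: 2,261" from the model summaries above.
def lstm_params(input_dim, units):
    # A Keras LSTM has 4 gates, each with (input_dim + units) weights
    # plus one bias per unit.
    return 4 * (input_dim + units + 1) * units

def dense_params(input_dim, units):
    # Weights plus one bias per output unit.
    return (input_dim + 1) * units

units = 20
input_dim = 7  # inferred: 4 * (7 + 20 + 1) * 20 == 2240, matching lstm_47

total = lstm_params(input_dim, units) + dense_params(units, 1)
print(total)  # 2240 + 21 = 2261, matching the summaries
```

The Dropout and Activation layers contribute no parameters, which is why only the LSTM (2,240) and Dense (21) rows carry counts in the summaries.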
===================
Plot: 44 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_47 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_47 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_119 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_47 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 14ms/step - loss: 0.0042 - val_loss: 1.7908e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 3.3403e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 4.2432e-05
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0010 - val_loss: 9.8841e-05
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 6.9213e-05
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 9.0523e-04 - val_loss: 2.9178e-05
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 9.8839e-04 - val_loss: 5.0590e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 1.9310e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 9.3247e-04 - val_loss: 3.7011e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 1.5394e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 8.5550e-04 - val_loss: 8.8133e-05
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 8.1313e-04 - val_loss: 3.9252e-05
Epoch 13/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0011 - val_loss: 5.9176e-05
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0011 - val_loss: 2.6771e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 9.7468e-04 - val_loss: 1.3625e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 8.2046e-04 - val_loss: 8.1485e-05
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 9.4427e-04 - val_loss: 1.7743e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 8.8622e-04 - val_loss: 1.2716e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 7.6297e-04 - val_loss: 9.3343e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0010 - val_loss: 3.5429e-05
Mean Squared Error on the training data: 0.000193
Mean Squared Error on the test data:     0.000157
======================================================================================================
===================
Plot: 45 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_48 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_48 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_120 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_48 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0107 - val_loss: 0.0024
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 6.7507e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 3.5260e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0018 - val_loss: 8.9137e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0011
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 0.0010
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0017 - val_loss: 0.0016
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 0.0010
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 7.8587e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 4.9127e-04
Epoch 11/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0014 - val_loss: 4.7030e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 1.7656e-04
Epoch 13/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 2.4133e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0013 - val_loss: 9.7346e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 3.3605e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0012 - val_loss: 3.1372e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 7.1415e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 1.4666e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 1.0526e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 3.6602e-05
Mean Squared Error on the training data: 0.000145
Mean Squared Error on the test data:     0.000208
======================================================================================================
===================
Plot: 46 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_49 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_49 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_121 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_49 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 13s 14ms/step - loss: 0.0268 - val_loss: 3.4993e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0080 - val_loss: 3.4349e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0056 - val_loss: 2.4974e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 1.6940e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 1.8540e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0032 - val_loss: 5.8291e-04
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 2.0612e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 1.8375e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 2.4787e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 2.4596e-04
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 1.0994e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 1.3892e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 8.6638e-05
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 7.0031e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 5.9698e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 2.1872e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 3.6413e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 7.1247e-05
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 1.4810e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 8.6730e-05
Mean Squared Error on the training data: 0.000436
Mean Squared Error on the test data:     0.000189
======================================================================================================
===================
Plot: 47 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_50 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_50 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_122 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_50 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0217 - val_loss: 1.8012e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0081 - val_loss: 2.4460e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0065 - val_loss: 2.4863e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0054 - val_loss: 2.8162e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0049 - val_loss: 2.9765e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0046 - val_loss: 8.5273e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0050 - val_loss: 6.7871e-04
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0047 - val_loss: 1.9821e-04
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0046 - val_loss: 3.1714e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0043 - val_loss: 9.5361e-04
Epoch 11/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0044 - val_loss: 4.7265e-04
Epoch 12/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0044 - val_loss: 1.2546e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0049 - val_loss: 5.8431e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0045 - val_loss: 6.7784e-06
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0043 - val_loss: 2.4726e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0047 - val_loss: 3.0894e-04
Epoch 17/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0046 - val_loss: 4.2964e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0044 - val_loss: 0.0010
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0046 - val_loss: 1.6420e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 2.0842e-05
Mean Squared Error on the training data: 0.000157
Mean Squared Error on the test data:     0.000318
======================================================================================================
===================
Plot: 48 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_51 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_51 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_123 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_51 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0231 - val_loss: 2.0089e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0075 - val_loss: 1.4982e-04
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0062 - val_loss: 1.0953e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0051 - val_loss: 3.2115e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 5.2024e-05
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 4.2430e-04
Epoch 7/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0052 - val_loss: 4.4194e-04
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0046 - val_loss: 2.2158e-04
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0043 - val_loss: 1.8011e-04
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0040 - val_loss: 5.4196e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0041 - val_loss: 2.9540e-04
Epoch 12/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0041 - val_loss: 5.2823e-05
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0048 - val_loss: 2.9981e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0045 - val_loss: 2.7845e-05
Epoch 15/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0042 - val_loss: 2.2843e-04
Epoch 16/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0046 - val_loss: 1.7867e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0041 - val_loss: 2.0922e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0041 - val_loss: 6.2123e-04
Epoch 19/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0046 - val_loss: 1.2344e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0044 - val_loss: 1.5922e-05
Mean Squared Error on the training data: 0.000227
Mean Squared Error on the test data:     0.000655
======================================================================================================
===================
Plot: 49 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_52 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_52 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_124 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_52 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0134 - val_loss: 0.0030
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0047 - val_loss: 0.0021
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0031 - val_loss: 8.5788e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 0.0018
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0018 - val_loss: 0.0033
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 0.0021
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0020 - val_loss: 0.0035
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 0.0038
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0022
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0016 - val_loss: 0.0021
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0027
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0017
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0021
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 7.0695e-04
Epoch 15/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0016 - val_loss: 0.0020
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0011
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0033
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 0.0016
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 7.8817e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0010
Mean Squared Error on the training data: 0.000307
Mean Squared Error on the test data:     0.000399
======================================================================================================
===================
Plot: 50 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_53 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_53 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_125 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_53 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 14ms/step - loss: 0.0169 - val_loss: 0.0052
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0062 - val_loss: 0.0041
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 0.0016
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0023
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0030 - val_loss: 0.0028
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 0.0027
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0019
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 0.0013
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0018
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0014
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 0.0020
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 7.6495e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0028
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 5.8886e-04
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 6.6931e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 9.3401e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 0.0010
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0021 - val_loss: 0.0018
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 8.1892e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 7.4143e-04
Mean Squared Error on the training data: 0.000590
Mean Squared Error on the test data:     0.001675
======================================================================================================
===================
Plot: 51 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_54 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_54 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_126 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_54 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 14s 15ms/step - loss: 0.0042 - val_loss: 0.0010
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 3.9016e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 8.5729e-04 - val_loss: 1.4751e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 6.5397e-04 - val_loss: 6.0613e-05
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 6.2560e-04 - val_loss: 2.5605e-05
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 5.6590e-04 - val_loss: 2.7241e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 5.9371e-04 - val_loss: 3.1551e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 5.6224e-04 - val_loss: 5.4004e-06
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 5.9607e-04 - val_loss: 1.7531e-04
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 5.3616e-04 - val_loss: 2.1297e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 4.7349e-04 - val_loss: 2.4208e-04
Epoch 12/20
988/988 [==============================] - 8s 9ms/step - loss: 5.1600e-04 - val_loss: 6.4766e-06
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 5.8935e-04 - val_loss: 3.7876e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 5.5493e-04 - val_loss: 1.6417e-05
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 5.3509e-04 - val_loss: 1.3080e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 5.7018e-04 - val_loss: 1.6315e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 5.5299e-04 - val_loss: 1.9092e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 5.1209e-04 - val_loss: 2.9676e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 5.8242e-04 - val_loss: 1.8957e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 5.6224e-04 - val_loss: 1.9793e-05
Mean Squared Error on the training data: 0.000042
Mean Squared Error on the test data:     0.000196
======================================================================================================
===================
Plot: 52 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_55 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_55 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_127 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_55 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0119 - val_loss: 0.0016
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0040 - val_loss: 8.2795e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 3.9288e-04
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0021 - val_loss: 6.4482e-04
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0020 - val_loss: 6.6890e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 0.0014
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0020 - val_loss: 0.0013
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 3.8899e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 8.7895e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 0.0011
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0010
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 2.9620e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 7.8204e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0017 - val_loss: 3.6301e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 5.3779e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 5.6199e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 0.0011
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0010
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 2.6059e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 2.3580e-04
Mean Squared Error on the training data: 0.000199
Mean Squared Error on the test data:     0.000369
======================================================================================================
===================
Plot: 53 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_56 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_56 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_128 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_56 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0321 - val_loss: 0.0075
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0105 - val_loss: 0.0055
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0068 - val_loss: 0.0047
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0065 - val_loss: 0.0046
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0048 - val_loss: 0.0048
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0041 - val_loss: 0.0046
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 0.0036
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 0.0040
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 0.0026
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 0.0023
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0031 - val_loss: 0.0019
Epoch 12/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0032 - val_loss: 0.0011
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0034 - val_loss: 0.0019
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0031 - val_loss: 0.0018
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0033 - val_loss: 0.0013
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0039 - val_loss: 8.8469e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 8.7428e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0035 - val_loss: 5.7152e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0030 - val_loss: 1.9050e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0031 - val_loss: 1.9523e-04
Mean Squared Error on the training data: 0.001069
Mean Squared Error on the test data:     0.001251
======================================================================================================
===================
Plot: 54 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_57 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_57 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_129 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_57 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0223 - val_loss: 2.6762e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0078 - val_loss: 1.7359e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0062 - val_loss: 1.3964e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0046 - val_loss: 2.5083e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0042 - val_loss: 1.1760e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0039 - val_loss: 6.4648e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0046 - val_loss: 7.1336e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0040 - val_loss: 3.6813e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0039 - val_loss: 2.0045e-04
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0035 - val_loss: 2.7685e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 4.4919e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0035 - val_loss: 4.4453e-05
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0040 - val_loss: 2.1557e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0038 - val_loss: 2.2212e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0036 - val_loss: 1.3805e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0039 - val_loss: 2.1926e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 6.0570e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0035 - val_loss: 3.5461e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0039 - val_loss: 4.3713e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0038 - val_loss: 1.3376e-05
Mean Squared Error on the training data: 0.000213
Mean Squared Error on the test data:     0.000392
======================================================================================================
===================
Plot: 55 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_58 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_58 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_130 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_58 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0253 - val_loss: 5.6085e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0085 - val_loss: 3.4963e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0062 - val_loss: 1.6207e-04
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0063 - val_loss: 1.9262e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 3.2079e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 5.2436e-05
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0043 - val_loss: 5.5342e-05
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0043 - val_loss: 3.5966e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 6.3503e-05
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0040 - val_loss: 4.9196e-05
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 5.6708e-05
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 1.6394e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 1.1548e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0035 - val_loss: 5.4960e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 4.3692e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 4.2963e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 5.0845e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 3.6772e-05
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0039 - val_loss: 4.4005e-05
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 1.6149e-04
Mean Squared Error on the training data: 0.000400
Mean Squared Error on the test data:     0.000065
======================================================================================================
===================
Plot: 56 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_59 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_59 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_131 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_59 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0131 - val_loss: 1.0129e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 2.2955e-04
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 5.5941e-05
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 2.1029e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 1.8787e-05
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 4.1412e-04
Epoch 7/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0036 - val_loss: 6.7364e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0031 - val_loss: 3.5639e-04
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0026 - val_loss: 2.1149e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 2.3606e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 2.4588e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0023 - val_loss: 6.2719e-05
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 1.6278e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 1.5117e-05
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 9.4156e-05
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 1.2961e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 3.1783e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0027 - val_loss: 2.6321e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 6.7130e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0031 - val_loss: 7.5603e-06
Mean Squared Error on the training data: 0.000143
Mean Squared Error on the test data:     0.001042
======================================================================================================
===================
Plot: 57 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_60 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_60 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_132 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_60 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0191 - val_loss: 0.0021
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0064 - val_loss: 0.0053
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0049 - val_loss: 0.0044
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 0.0012
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0034 - val_loss: 4.3107e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0029 - val_loss: 0.0051
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 0.0010
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0027 - val_loss: 7.5968e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0028
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0025
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0024
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0036
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 0.0081
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0024 - val_loss: 0.0026
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0026 - val_loss: 0.0015
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0028 - val_loss: 0.0029
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0025
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 0.0082
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 0.0056
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0025 - val_loss: 0.0045
Mean Squared Error on the training data: 0.000675
Mean Squared Error on the test data:     0.000119
======================================================================================================
===================
Plot: 58 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_61 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_61 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_133 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_61 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 15ms/step - loss: 0.0274 - val_loss: 3.6901e-04
Epoch 2/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0099 - val_loss: 0.0011
Epoch 3/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0068 - val_loss: 0.0071
Epoch 4/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0061 - val_loss: 8.0292e-04
Epoch 5/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0046 - val_loss: 3.8815e-04
Epoch 6/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0046 - val_loss: 0.0037
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 0.0027
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0040 - val_loss: 4.4420e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0042 - val_loss: 0.0014
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 0.0015
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0034 - val_loss: 8.8172e-04
Epoch 12/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 3.5114e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0039 - val_loss: 0.0030
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0037 - val_loss: 9.7197e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0038 - val_loss: 6.5871e-04
Epoch 16/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0045 - val_loss: 4.3901e-04
Epoch 17/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0036 - val_loss: 2.5720e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 0.0071
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0041 - val_loss: 0.0011
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0038 - val_loss: 0.0024
Mean Squared Error on the training data: 0.000883
Mean Squared Error on the test data:     0.000501
======================================================================================================
===================
Plot: 59 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_62 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_62 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_134 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_62 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 15s 16ms/step - loss: 0.0128 - val_loss: 3.9431e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0049 - val_loss: 3.5906e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0035 - val_loss: 1.4559e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0031 - val_loss: 1.1720e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 8.2408e-05
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 6.0280e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0030 - val_loss: 5.8031e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 2.4676e-05
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 2.8327e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 3.7211e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 3.5806e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 7.0046e-05
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0029 - val_loss: 4.0066e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 2.0137e-05
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 1.4089e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 3.4530e-05
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 4.1357e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 5.4751e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 8.4761e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 6.6656e-05
Mean Squared Error on the training data: 0.000141
Mean Squared Error on the test data:     0.000377
======================================================================================================
===================
Plot: 60 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_63 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_63 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_135 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_63 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 16ms/step - loss: 0.0297 - val_loss: 0.0090
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0097 - val_loss: 0.0055
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0065 - val_loss: 0.0043
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0053 - val_loss: 0.0055
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0036 - val_loss: 0.0054
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0033 - val_loss: 0.0057
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 0.0032
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 0.0045
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 0.0030
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 0.0029
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0019 - val_loss: 0.0031
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 9.4250e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 0.0016
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0019 - val_loss: 9.5324e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 6.9056e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 5.4290e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0020 - val_loss: 7.7276e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0020 - val_loss: 0.0014
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0020 - val_loss: 4.2057e-04
Mean Squared Error on the training data: 0.000899
Mean Squared Error on the test data:     0.000574
======================================================================================================
===================
Plot: 61 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_64 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_64 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_136 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_64 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 16ms/step - loss: 0.0199 - val_loss: 0.0026
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0059 - val_loss: 0.0011
Epoch 3/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0042 - val_loss: 0.0012
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 0.0014
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 0.0015
Epoch 6/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0026 - val_loss: 0.0010
Epoch 7/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0020 - val_loss: 6.2030e-04
Epoch 8/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0019 - val_loss: 9.3130e-04
Epoch 9/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0022 - val_loss: 9.3244e-04
Epoch 10/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0020 - val_loss: 9.6756e-04
Epoch 11/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0019 - val_loss: 5.2667e-04
Epoch 12/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0018 - val_loss: 4.7463e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 9.6270e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 6.7980e-04
Epoch 15/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0019 - val_loss: 2.9918e-04
Epoch 16/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0021 - val_loss: 4.1194e-04
Epoch 17/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0017 - val_loss: 4.3645e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0016 - val_loss: 4.7836e-04
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0019 - val_loss: 2.3288e-04
Epoch 20/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0019 - val_loss: 2.5348e-04
Mean Squared Error on the training data: 0.000311
Mean Squared Error on the test data:     0.000515
======================================================================================================
===================
Plot: 62 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_65 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_65 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_137 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_65 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 16ms/step - loss: 0.0087 - val_loss: 4.2065e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0029 - val_loss: 3.5510e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 1.0324e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0020 - val_loss: 3.3754e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 1.9617e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 7.5699e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0019 - val_loss: 6.3104e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 3.0610e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 3.8157e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 3.9467e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 3.6488e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 1.3129e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 5.4208e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 1.1962e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 5.7352e-05
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 2.9891e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 3.5457e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 7.8780e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 1.7536e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 2.7206e-04
Mean Squared Error on the training data: 0.000209
Mean Squared Error on the test data:     0.000468
======================================================================================================
===================
Plot: 63 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_66 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_66 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_138 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_66 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 16ms/step - loss: 0.0192 - val_loss: 5.4254e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0071 - val_loss: 5.0546e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0050 - val_loss: 2.7383e-04
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0036 - val_loss: 3.4611e-04
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0035 - val_loss: 2.3985e-04
Epoch 6/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0027 - val_loss: 2.3811e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0030 - val_loss: 1.8545e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 1.6742e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0029 - val_loss: 2.2112e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 4.1343e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 4.4009e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 1.0864e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 2.4700e-04
Epoch 14/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0027 - val_loss: 5.0179e-04
Epoch 15/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0022 - val_loss: 1.8315e-04
Epoch 16/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0027 - val_loss: 2.4019e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 1.1507e-04
Epoch 18/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0023 - val_loss: 5.9298e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0029 - val_loss: 8.9198e-05
Epoch 20/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0026 - val_loss: 9.5755e-05
Mean Squared Error on the training data: 0.000286
Mean Squared Error on the test data:     0.000175
======================================================================================================
===================
Plot: 64 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_67 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_67 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_139 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_67 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 16ms/step - loss: 0.0239 - val_loss: 0.0017
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0093 - val_loss: 0.0015
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0067 - val_loss: 0.0012
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0054 - val_loss: 9.3164e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0048 - val_loss: 8.7418e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0044 - val_loss: 0.0012
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0043 - val_loss: 6.4951e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0043 - val_loss: 1.7464e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0042 - val_loss: 7.3297e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 9.0053e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0035 - val_loss: 0.0012
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0041 - val_loss: 2.6018e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0040 - val_loss: 2.9583e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0038 - val_loss: 5.9374e-05
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0036 - val_loss: 2.3358e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0042 - val_loss: 5.8587e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0037 - val_loss: 6.6217e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0038 - val_loss: 9.5754e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0043 - val_loss: 1.6215e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0038 - val_loss: 3.8388e-04
Mean Squared Error on the training data: 0.000425
Mean Squared Error on the test data:     0.000215
======================================================================================================
===================
Plot: 65 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_68 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_68 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_140 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_68 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 17ms/step - loss: 0.0081 - val_loss: 0.0012
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 0.0011
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 1.6415e-04
Epoch 4/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0016 - val_loss: 7.7602e-04
Epoch 5/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 3.1250e-04
Epoch 6/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 7.1189e-04
Epoch 7/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0016 - val_loss: 6.9946e-04
Epoch 8/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 5.2860e-04
Epoch 9/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 9.7285e-04
Epoch 10/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 0.0012
Epoch 11/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 3.4939e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 4.6386e-04
Epoch 13/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 6.3860e-04
Epoch 14/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 0.0011
Epoch 15/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0015 - val_loss: 2.9817e-04
Epoch 16/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0014 - val_loss: 2.1836e-04
Epoch 17/20
988/988 [==============================] - 8s 9ms/step - loss: 0.0015 - val_loss: 3.2538e-04
Epoch 18/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 0.0013
Epoch 19/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0013 - val_loss: 2.0789e-04
Epoch 20/20
988/988 [==============================] - 8s 8ms/step - loss: 0.0014 - val_loss: 5.9344e-04
Mean Squared Error on the training data: 0.000258
Mean Squared Error on the test data:     0.000367
======================================================================================================
===================
Plot: 66 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_69 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_69 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_141 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_69 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 17s 17ms/step - loss: 0.0124 - val_loss: 0.0050
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0049 - val_loss: 0.0042
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0030 - val_loss: 0.0014
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 0.0020
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 0.0023
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0022
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0019 - val_loss: 0.0030
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0035
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0035
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0018
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0015
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0014
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0014
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 9.7511e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0014
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 0.0014
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0024
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0010
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 3.0330e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0015 - val_loss: 8.7478e-04
Mean Squared Error on the training data: 0.000649
Mean Squared Error on the test data:     0.000715
======================================================================================================
===================
Plot: 67 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_70 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_70 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_142 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_70 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 17s 17ms/step - loss: 0.0266 - val_loss: 0.0024
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0091 - val_loss: 7.9109e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0064 - val_loss: 4.0494e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0051 - val_loss: 7.6942e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0040 - val_loss: 3.3927e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0035 - val_loss: 3.3428e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0034 - val_loss: 0.0011
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 8.2501e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 2.2520e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0028 - val_loss: 5.9751e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 3.5295e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 2.3702e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 5.8964e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 6.2845e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 0.0012
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 8.0486e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 4.3577e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 3.2652e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 1.8897e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 2.3929e-04
Mean Squared Error on the training data: 0.000501
Mean Squared Error on the test data:     0.001742
======================================================================================================
===================
Plot: 68 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_71 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_71 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_143 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_71 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 17s 17ms/step - loss: 0.0072 - val_loss: 0.0010
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 0.0011
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 2.6817e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 7.4392e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 5.9522e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 9.5996e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0017 - val_loss: 0.0015
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 9.2230e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 8.5501e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 0.0012
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 0.0013
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 3.2204e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 5.2619e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 1.6941e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 6.3507e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 5.3847e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 6.2624e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 8.3076e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 2.7748e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 3.8590e-04
Mean Squared Error on the training data: 0.000257
Mean Squared Error on the test data:     0.000397
======================================================================================================
===================
Plot: 69 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_72 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_72 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_144 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_72 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 16s 17ms/step - loss: 0.0088 - val_loss: 0.0040
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0033 - val_loss: 0.0021
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 9.1104e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0018 - val_loss: 0.0016
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0012
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0014 - val_loss: 0.0018
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0016 - val_loss: 0.0016
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 0.0019
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 0.0015
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 0.0010
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 0.0013
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0010 - val_loss: 9.8483e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 6.4069e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 1.9928e-04
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 6.2632e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0013 - val_loss: 5.4941e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0011 - val_loss: 0.0015
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0010 - val_loss: 2.5256e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0011 - val_loss: 1.6972e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0012 - val_loss: 3.8562e-04
Mean Squared Error on the training data: 0.000266
Mean Squared Error on the test data:     0.000511
======================================================================================================
===================
Plot: 70 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_73 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_73 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_145 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_73 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 17s 17ms/step - loss: 0.0116 - val_loss: 6.2346e-04
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0043 - val_loss: 4.3424e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0036 - val_loss: 4.1002e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 3.4020e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 3.3805e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 6.8455e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 4.3582e-04
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 2.7954e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 2.5810e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 4.1425e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 1.6979e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 1.1244e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0027 - val_loss: 4.8473e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 4.4061e-06
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 1.7902e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 4.7390e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 1.5971e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 5.5728e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 1.8567e-05
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 1.9325e-05
Mean Squared Error on the training data: 0.000090
Mean Squared Error on the test data:     0.000600
======================================================================================================
===================
Plot: 71 (out of 71)
===================
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_74 (LSTM)               (None, 20)                2240      
_________________________________________________________________
dropout_74 (Dropout)         (None, 20)                0         
_________________________________________________________________
dense_146 (Dense)            (None, 1)                 21        
_________________________________________________________________
activation_74 (Activation)   (None, 1)                 0         
=================================================================
Total params: 2,261
Trainable params: 2,261
Non-trainable params: 0
_________________________________________________________________
Train on 988 samples, validate on 53 samples
Epoch 1/20
988/988 [==============================] - 17s 17ms/step - loss: 0.0157 - val_loss: 0.0011
Epoch 2/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0053 - val_loss: 6.1707e-04
Epoch 3/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0040 - val_loss: 5.4259e-04
Epoch 4/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0032 - val_loss: 8.5653e-04
Epoch 5/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 5.2760e-04
Epoch 6/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 9.3166e-04
Epoch 7/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0029 - val_loss: 0.0012
Epoch 8/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0026 - val_loss: 5.2528e-04
Epoch 9/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 5.2707e-04
Epoch 10/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 4.0970e-04
Epoch 11/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 7.3734e-04
Epoch 12/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 1.2770e-04
Epoch 13/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 2.7090e-04
Epoch 14/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 3.1741e-05
Epoch 15/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 3.6425e-04
Epoch 16/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0025 - val_loss: 9.1076e-04
Epoch 17/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0022 - val_loss: 6.9631e-04
Epoch 18/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0021 - val_loss: 5.5529e-04
Epoch 19/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0024 - val_loss: 1.9037e-04
Epoch 20/20
988/988 [==============================] - 9s 9ms/step - loss: 0.0023 - val_loss: 1.0197e-04
Mean Squared Error on the training data: 0.000275
Mean Squared Error on the test data:     0.000388
======================================================================================================
======================================================================================================
Total run time in seconds: 12327

Predicting 10 days ahead with the LSTM

In [71]:
from keras import regularizers

# Define the LSTM model
def LSTM10_model(inputs, output_size, neurons, activ_func="linear",
                dropout=0.2, loss="mean_squared_error", optimizer="rmsprop"):
    model = Sequential()
    
    model.add(LSTM(neurons, input_shape=(inputs.shape[1], inputs.shape[2]), return_sequences=True))
    model.add(Dropout(dropout))
    
    model.add(LSTM(neurons*2, return_sequences=False))
    model.add(Dropout(dropout))
    
    model.add(Dense(units=output_size))
    model.add(Activation(activ_func))
    
    model.compile(loss=loss, optimizer=optimizer)
    
    model.summary()
    return model
In [72]:
def create_LSTM10_dataset(dataset, window, pred_len=10):
    # dataset is a 2-D array; window is the number of historical data points each
    # prediction is based on, and pred_len is the number of days predicted ahead.
    dataX, dataY = [], []
    for i in range(len(dataset)-window-pred_len+1):
        dataX.append(dataset[i:(i+window), :])
        dataY.append(dataset[(i + window):(i + window + pred_len), 4])
    return np.array(dataX), np.array(dataY)
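To make the sliding-window layout concrete, here is a small self-contained sketch of `create_LSTM10_dataset` on made-up data (20 rows, 7 columns; column 4 is assumed to hold the scaled adjusted close, as in the rest of the notebook):

```python
import numpy as np

def create_LSTM10_dataset(dataset, window, pred_len=10):
    # Same function as above: each sample is a block of `window` rows of all
    # features, and each target is the next `pred_len` values of column 4.
    dataX, dataY = [], []
    for i in range(len(dataset) - window - pred_len + 1):
        dataX.append(dataset[i:(i + window), :])
        dataY.append(dataset[(i + window):(i + window + pred_len), 4])
    return np.array(dataX), np.array(dataY)

# Toy dataset: 20 "days", 7 features, filled with a running counter
toy = np.arange(20 * 7, dtype=float).reshape(20, 7)
X, y = create_LSTM10_dataset(toy, window=5, pred_len=10)
print(X.shape)  # (6, 5, 7): 20 - 5 - 10 + 1 = 6 samples
print(y.shape)  # (6, 10): ten future values of column 4 per sample
```

The first target row `y[0]` is exactly `toy[5:15, 4]`, i.e. the ten days of column 4 immediately following the first five-day window.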
In [73]:
# Specify how many days ahead we want to predict by changing the into_the_future parameter.
into_the_future = 10

"""Define the input data for the 10-day LSTM prediction"""
window = 10
# Note: this still calls the single-day create_LSTM_dataset (targets of length 1),
# not create_LSTM10_dataset, which is why the outputs printed below are one-dimensional.
LSTM10_train_input, LSTM10_train_output = create_LSTM_dataset(LSTM_train_list[0].values, window)
LSTM10_test_input, LSTM10_test_output = create_LSTM_dataset(LSTM_test_list[0].values, window)
In [74]:
'''reshape the input to be [samples, time steps, features]'''
LSTM10_test_input = np.reshape(LSTM10_test_input, (LSTM10_test_input.shape[0], LSTM10_test_input.shape[1], 7))
LSTM10_train_input = np.reshape(LSTM10_train_input, (LSTM10_train_input.shape[0], LSTM10_train_input.shape[1], 7))


print(LSTM10_train_input.shape)
print(LSTM10_test_input.shape)
print('--------------------')
print(LSTM10_train_output.shape)
print(LSTM10_test_output.shape)
(1032, 10, 7)
(250, 10, 7)
--------------------
(1032,)
(250,)
In [75]:
# Random seed for reproducibility
#np.random.seed(17)
np.random.seed(202)

model_10 = LSTM10_model(LSTM10_train_input, output_size = 1, neurons=50)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_75 (LSTM)               (None, 10, 50)            11600     
_________________________________________________________________
dropout_75 (Dropout)         (None, 10, 50)            0         
_________________________________________________________________
lstm_76 (LSTM)               (None, 100)               60400     
_________________________________________________________________
dropout_76 (Dropout)         (None, 100)               0         
_________________________________________________________________
dense_147 (Dense)            (None, 1)                 101       
_________________________________________________________________
activation_75 (Activation)   (None, 1)                 0         
=================================================================
Total params: 72,101
Trainable params: 72,101
Non-trainable params: 0
_________________________________________________________________
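The parameter counts in the summary can be verified by hand: a Keras LSTM layer has 4 * ((input_dim + units) * units + units) parameters, since each of its four gates has an input kernel, a recurrent kernel and a bias. A quick check:

```python
def lstm_params(input_dim, units):
    # Four gates, each with an input kernel, a recurrent kernel and a bias.
    return 4 * ((input_dim + units) * units + units)

first = lstm_params(7, 50)      # lstm_75: 7 features -> 50 units
second = lstm_params(50, 100)   # lstm_76: 50 units -> 100 units
dense = 100 * 1 + 1             # dense_147: 100 weights + 1 bias

print(first, second, dense, first + second + dense)
# 11600 60400 101 72101 -- matches the summary above
```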
In [76]:
start = time.time()

trained_LSTM10 = model_10.fit(LSTM10_train_input, LSTM10_train_output, 
                              epochs=1, batch_size=2, verbose=1, shuffle=True, validation_split=0.05)

print('Total training time (s): {0:0.0f}'.format(time.time()-start))
Train on 980 samples, validate on 52 samples
Epoch 1/1
980/980 [==============================] - 35s 36ms/step - loss: 0.0031 - val_loss: 0.0024
Total training time (s): 37

Plot the training error

In [77]:
#plot_error(trained_LSTM10)

trainScore = model_10.evaluate(LSTM10_train_input, LSTM10_train_output, verbose=0)
testScore = model_10.evaluate(LSTM10_test_input, LSTM10_test_output, verbose=0)
print("Mean Squared Error on the training data: {0:0.6f}".format(trainScore)) 
print("Mean Squared Error on the test data:     {0:0.6f}".format(testScore))
Mean Squared Error on the training data: 0.000703
Mean Squared Error on the test data:     0.002080
In [78]:
# The prediction output always has a single column (chosen when the LSTM was designed).



# Make predictions for the train set. Then invert the scaling.
#LSTM10_train_pred = model_10.predict(LSTM10_train_input[:-into_the_future])
#print(LSTM10_train_pred.shape)
#LSTM10_train_pred = Un_scale_data(LSTM10_train_pred, tickr)
#LSTM10_train_output = Un_scale_data(LSTM10_train_output, tickr) ##

Performance on the Test Set

Let's focus on what's important and interesting, namely the performance on the test set.

In [79]:
### NOTE: THESE TWO FUNCTIONS WERE COPIED VERBATIM AND STILL NEED TO BE ADAPTED.

def plot_long_pred(pred_data, true_data, pred_len, title='', xlabel='', ylabel=''):
    """ Plot the predictions stored in pred_data and the true values stored in true_data.
        pred_len is the length of each prediction. """
    index = true_data.index
    fig = plt.figure(figsize=(16, 7), facecolor='white')
    ax = fig.add_subplot(111)
    ax.plot(true_data.values[:, 0][::-1], label='True Data')
    #Pad the list of predictions to shift it in the graph to its correct start
    for i, data in enumerate(pred_data):
        padding = [None for p in range(i * pred_len)]
        #Show the legend only for the first 5 predictions
        if i < 5:
            plt.plot((padding + data), label='Prediction')
            plt.legend()
        else:
            plt.plot(padding + data)
    ax.set_title(label=title, fontsize=20)
    ax.set_xlabel(xlabel, fontsize=15) 
    ax.autoscale(enable=True, axis='x', tight=True)    
    ax.set_ylabel(ylabel, fontsize=15)
    ax.grid(axis='both', alpha=.5)
    ax.xaxis.set_major_locator(MaxNLocator(12))
    ax.xaxis.set_major_formatter(IndexFormatter(index[::-1]))
    plt.setp(ax.get_xticklabels(), rotation=50, fontsize=12)
    plt.setp(ax.get_yticklabels(), fontsize=12)
    plt.show()


def predict_multiple_sequences(model, data, window_size, pred_len):
    """ Make a sequence of predictions of pred_len steps before shifting prediction run forward by pred_len steps."""
    prediction_seqs = []
    for i in range(int(len(data)/pred_len)):
        curr_frame = data[i*pred_len]
        predicted = []
        for j in range(pred_len):
            predicted.append(model.predict(curr_frame[np.newaxis,:,:])[0,0])
            curr_frame = curr_frame[1:]
            curr_frame = np.insert(curr_frame, [window_size-1], predicted[-1], axis=0)
        prediction_seqs.append(predicted)
    return prediction_seqs
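The frame update inside the inner loop of predict_multiple_sequences can be illustrated in isolation. Note that np.insert broadcasts the scalar prediction across the whole new row, so every feature of the appended time step receives the same value (toy frame with a window of 3 and 2 features):

```python
import numpy as np

window_size = 3
curr_frame = np.array([[1., 1.],
                       [2., 2.],
                       [3., 3.]])   # (window, features)
prediction = 4.0                    # pretend model output

curr_frame = curr_frame[1:]         # drop the oldest time step
curr_frame = np.insert(curr_frame, [window_size - 1], prediction, axis=0)

print(curr_frame)
# [[2. 2.]
#  [3. 3.]
#  [4. 4.]]
```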
In [80]:
tickr = get_ticker(LSTM_train_list[0])

LSTM10_predictions = predict_multiple_sequences(model_10, LSTM10_test_input[::-1], window, into_the_future)


#inv_LSTM10_predictions = copy.deepcopy(pd.DataFrame(LSTM10_predictions).transpose())
#for i in range(len(LSTM10_predictions)):
#    Un_scale_data(inv_LSTM10_predictions.iloc[:, i], tickr)
#    #column = Un_scale_data(copy.deepcopy(column.values), tickr)
##    inv_LSTM10_predictions.append(list1.tolist())
In [81]:
plot_long_pred(LSTM10_predictions, LSTM_test_list[0], into_the_future,
               title='10 day predictions on test set, ' + tickr, xlabel='Date', ylabel='Price')

Find the optimal algorithm for each stock

Run the cell below to find the optimal algorithm/tuning parameters for each stock. (The run will take several tens of hours.)

In [82]:
from keras import regularizers

# Define the LSTM model
def LSTM10_model2(inputs, output_size, neurons, activ_func="linear",
                dropout=0.2, loss="mean_squared_error", optimizer="rmsprop"):
    model = Sequential()

    model.add(LSTM(neurons, input_shape=(inputs.shape[1], inputs.shape[2]), return_sequences=True))
    model.add(Dropout(dropout))
    
    model.add(LSTM(neurons*2, return_sequences=False))
    model.add(Dropout(dropout))
    
    model.add(Dense(units=output_size))
    model.add(Activation(activ_func))
    
    model.compile(loss=loss, optimizer=optimizer)
    
    model.summary()
    return model


def many_LSTM10():
    global_start_time = time.time()
    windows = [10, 20]
    into_the_future = 10
    start = 0
    count = 1
    
    for stock_nbr in range(1):
        for window in windows:
            for batch_size in [1, 2, 10, 50, 100]:
                for epoch in [1, 2]:
                    """"Define the input data for the 10 day LSTM prediction"""
                    LSTM10_train_input, LSTM10_train_output = create_LSTM_dataset(LSTM_train_list[stock_nbr].values, window)
                    LSTM10_test_input, LSTM10_test_output = create_LSTM_dataset(LSTM_test_list[stock_nbr].values, window)
                
                    '''reshape the input to be [samples, time steps, features]'''
                    LSTM10_test_input = np.reshape(LSTM10_test_input, (LSTM10_test_input.shape[0], 
                                                                       LSTM10_test_input.shape[1], 7))
                    LSTM10_train_input = np.reshape(LSTM10_train_input, (LSTM10_train_input.shape[0], 
                                                                         LSTM10_train_input.shape[1], 7))
                
                    print('===============================')
                    print('(Stock, window, batch_size, epoch)')
                    print('{0}, {1}, {2}, {3}'.format(get_ticker(LSTM_train_list[stock_nbr]), 
                                                      window, batch_size, epoch))
                    print('Run: {0} ({1})'.format(count, 1*2*len([1, 2, 10, 50, 100])*2))
                    print('===============================')
                
                    # Random seed for reproducibility
                    np.random.seed(202)
                    model_10 = LSTM10_model2(LSTM10_train_input, output_size = 1, neurons=50)
                    trained_LSTM10 = model_10.fit(LSTM10_train_input, LSTM10_train_output, epochs=epoch, 
                                                  batch_size=batch_size, verbose=1, 
                                                  shuffle=True, validation_split=0.05)
                
                    #plot_error(trained_LSTM10)
                    trainScore = model_10.evaluate(LSTM10_train_input, LSTM10_train_output, verbose=0)
                    testScore = model_10.evaluate(LSTM10_test_input, LSTM10_test_output, verbose=0)
                    print("Mean Squared Error on the training data: {0:0.6f}".format(trainScore)) 
                    print("Mean Squared Error on the test data:     {0:0.6f}".format(testScore))
                
                    tickr = get_ticker(LSTM_train_list[stock_nbr])
                    LSTM10_predictions = predict_multiple_sequences(model_10, LSTM10_test_input[::-1], 
                                                                    window, into_the_future)
                
                    plot_long_pred(LSTM10_predictions, LSTM_test_list[stock_nbr], into_the_future,
                           title='10 day predictions on test set, ' + tickr, xlabel='Date', ylabel='Price')
                
                    count += 1
                    print('==================================================================')
    print('==================================================================')
    print('Total training time (s): {0:0.0f}'.format(time.time()-global_start_time))
    
    
In [83]:
#many_LSTM10()
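The number of runs the grid search performs per stock can be sanity-checked by enumerating the combinations (the lists below mirror the loops in many_LSTM10 above):

```python
import itertools

windows = [10, 20]
batch_sizes = [1, 2, 10, 50, 100]
epochs = [1, 2]

# One run per (window, batch_size, epochs) combination.
grid = list(itertools.product(windows, batch_sizes, epochs))
print(len(grid))  # 20 runs per stock
```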
In [84]:
###### from keras import regularizers

# Define the LSTM model
#def LSTM10_model2(inputs, output_size, neurons, activ_func="linear",
#                dropout=0.2, loss="mean_squared_error", optimizer="rmsprop"):
#    model = Sequential()

#    model.add(LSTM(neurons, input_shape=(inputs.shape[1], inputs.shape[2]), return_sequences=True))
#    model.add(Dropout(dropout))
    
#    model.add(LSTM(neurons*2, return_sequences=False))
#    model.add(Dropout(dropout))
    
#    model.add(Dense(units=output_size))
#    model.add(Activation(activ_func))
    
#    model.compile(loss=loss, optimizer=optimizer)
    
#    model.summary()
#    return model


#def many_LSTM10():
#    global_start_time = time.time()
#    windows = [10, 20]
#    into_the_future = 10
#    start = 0
#    count = 1
#    stocks_to_change = [5, 12, 30, 49, 51, 61, 54]
    
#    for stock_nbr in stocks_to_change:
#        for window in windows:
#            for batch_size in [1, 150, 200]:
#                for epoch in [5, 10]:
#                    """"Define the input data for the 10 day LSTM prediction"""
#                    LSTM10_train_input, LSTM10_train_output = create_LSTM_dataset(LSTM_train_list[stock_nbr].values, window)
#                    LSTM10_test_input, LSTM10_test_output = create_LSTM_dataset(LSTM_test_list[stock_nbr].values, window)
                
#                    '''reshape the input to be [samples, time steps, features]'''
#                    LSTM10_test_input = np.reshape(LSTM10_test_input, (LSTM10_test_input.shape[0], LSTM10_test_input.shape[1], 7))
#                    LSTM10_train_input = np.reshape(LSTM10_train_input, (LSTM10_train_input.shape[0], LSTM10_train_input.shape[1], 7))
                
#                    print('===============================')
#                    print('(Stock, window, batch_size, epoch)')
#                    print('{0}, {1}, {2}, {3}'.format(get_ticker(LSTM_train_list[stock_nbr]), window, batch_size, epoch))
#                    print('Run: {0} ({1})'.format(count, len(stocks_to_change)*2*len([1, 150, 200])*2))
#                    print('===============================')
                
#                    # Random seed for reproducibility
#                    np.random.seed(202)
#                    model_10 = LSTM10_model2(LSTM10_train_input, output_size = 1, neurons=50)
#                    trained_LSTM10 = model_10.fit(LSTM10_train_input, LSTM10_train_output, epochs=epoch, 
#                                                   batch_size=batch_size, verbose=1, shuffle=True, validation_split=0.05)
                
#                    #plot_error(trained_LSTM10)
#                    trainScore = model_10.evaluate(LSTM10_train_input, LSTM10_train_output, verbose=0)
#                    testScore = model_10.evaluate(LSTM10_test_input, LSTM10_test_output, verbose=0)
#                    print("Mean Squared Error on the training data: {0:0.6f}".format(trainScore)) 
#                    print("Mean Squared Error on the test data:     {0:0.6f}".format(testScore))
                
#                    tickr = get_ticker(LSTM_train_list[stock_nbr])
#                    LSTM10_predictions = predict_multiple_sequences(model_10, LSTM10_test_input[::-1], window, into_the_future)
                
#                    plot_long_pred(LSTM10_predictions, LSTM_test_list[stock_nbr], into_the_future,
#                           title='10 day predictions on test set, ' + tickr, xlabel='Date', ylabel='Price')
                
#                    count += 1
#                    print('==================================================================')
#    print('==================================================================')
#    print('Total training time (s): {0:0.0f}'.format(time.time()-global_start_time))
    

Optimal tuning parameters

The optimal tuning parameters are stored in the algorithm_tunings dictionary. (Stock: window size, batch size, number of epochs)

In [86]:
#(Stock: Window size, batch size, number of epochs)

algorithm_tunings = {'ACAN-B.ST': [10, 1, 1], 'ANOD-B.ST': [10, 50, 1], 'ADDT-B.ST':[10, 2, 1],'AOI.ST':[10, 50, 1],
                    'AQ.ST':[10, 50, 1], 'ARCM.ST':[20, 100, 1], 'BEIA-B.ST': [20, 50, 1], 'BEIJ-B.ST': [10, 50, 1],
                    'BIOG-B.ST':[20, 10, 1], 'BIOT.ST':[10, 50, 1], 'PXXS-SDB.ST':[20, 2, 1], 'BULTEN.ST':[20, 50, 1],
                    'BURE.ST':[20, 50, 2], 'BMAX.ST':[10, 100, 2], 'CAT-A.ST':[10, 50, 1], 'CAT-B.ST': [10, 100, 2],
                    'CATE.ST': [10, 100, 2], 'CCC.ST': [10, 10, 1], 'CEVI.ST': [10, 50, 1], 'CLAS-B.ST': [20, 50, 2],
                    'CLA-B.ST': [10, 100, 2], 'COIC.ST': [20, 50, 1], 'CRED-A.ST':[20, 100, 1], 'DIOS.ST':[20, 50, 1],
                    'DUNI.ST':[10, 100, 2], 'ELAN-B.ST':[20, 100, 1], 'ENQ.ST': [10, 10, 1], 'FAG.ST':[20, 100, 1],
                    'FPAR.ST':[20, 50, 1], 'G5EN.ST':[20, 2, 1], 'GUNN.ST':[20, 50, 1], 'HLDX.ST':[10, 10, 1],
                    'HMED.ST':[20, 100, 1], 'HEBA-B.ST':[20, 100, 2], 'HIQ.ST':[20, 100, 2], 'HMS.ST':[10, 50, 1],
                    'IAR-B.ST':[10, 50, 1], 'IVSO.ST':[10, 50, 2], 'KABE-B.ST':[10, 50, 1], 'KAHL.ST':[10, 100, 2],
                    'KARO.ST':[20, 50, 1], 'KNOW.ST':[10, 100, 1], 'LIAB.ST':[10, 100, 2], 'LUC.ST':[20, 10, 1],
                    'MVIR-B.ST':[10, 2, 2], 'MEKO.ST':[10, 100, 2], 'MSON-A.ST':[10, 100, 1],'MSON-B.ST':[10, 100, 2],
                    'MYCR.ST':[10, 100, 2], 'NMAN.ST':[10, 100, 1], 'NETI-B.ST':[20, 50, 2], 'NEWA-B.ST':[10, 100, 2],
                    'NOLA-B.ST':[10, 100, 1], 'OEM-B.ST':[10, 50, 1], 'OPUS.ST':[10, 100, 2], 'ORX.ST':[10, 1, 1],
                    'PROB.ST':[10, 10, 1], 'QLRO.ST':[20, 2, 2], 'RAY-B.ST':[10, 100, 2], 'REZT.ST':[10, 10, 2],
                    'SAS.ST':[10, 100, 2], 'SMF.ST':[10, 1, 2], 'SKIS-B.ST':[10, 50, 1], 'STAR-B.ST':[10, 2, 1],
                    'SWOL-B.ST':[20, 100, 1], 'SYSR.ST':[20, 50, 1], 'TETY.ST':[10, 100, 1], 'TRAC-B.ST':[20, 100, 2],
                    'VBG-B.ST':[20, 2, 1], 'VITR.ST':[10, 50, 1], 'XVIVO.ST':[20, 50, 2], 'ORES.ST':[20, 50, 2]}
In [87]:
def create_pred_path(ticker):
    """Create a file path to store file(s)"""
    base = '/Users/jakob/Desktop/Programming/Udacity Machine Learning Nano Degree/Capstone Project/Predictions/'
    return(base + ticker + '_Predictions' + '.csv') 

Make predictions for each stock

Make predictions for all the stocks using each stock's individually tuned algorithm. Save the predictions both to a dictionary and to .csv files.

In [88]:
def make_all_10pred():
    all_predictions = {}
    global_start_time = time.time()
    into_the_future = 10
    count = 1
    
    print('Creating predictions... ')
    print()
    
    for stock_nbr in range(len(LSTM_train_list)):
        
        stock_ticker = get_ticker(LSTM_train_list[stock_nbr])
        stock_info = algorithm_tunings[stock_ticker]
        window, batch_size, epoch = stock_info[0], stock_info[1], stock_info[2]
        
        print('===============================')
        print('(Stock, window, batch_size, epoch)')
        print('{0}, {1}, {2}, {3}'.format(stock_ticker, window, batch_size, epoch))
        print('Count: {0} ({1})'.format(count, len(LSTM_train_list)))
        print('===============================')
        
        
        """"Define the input data for the 10 day LSTM prediction"""
        LSTM10_train_input, LSTM10_train_output = create_LSTM_dataset(LSTM_train_list[stock_nbr].values, window)
        LSTM10_test_input, LSTM10_test_output = create_LSTM_dataset(LSTM_test_list[stock_nbr].values, window)
        
        '''reshape the input to be [samples, time steps, features]'''
        LSTM10_test_input = np.reshape(LSTM10_test_input, (LSTM10_test_input.shape[0], LSTM10_test_input.shape[1], 7))
        LSTM10_train_input = np.reshape(LSTM10_train_input, (LSTM10_train_input.shape[0], LSTM10_train_input.shape[1], 7))
        
        # Random seed for reproducibility
        np.random.seed(202)
        model_10 = LSTM10_model(LSTM10_train_input, output_size = 1, neurons=50)
        trained_LSTM10 = model_10.fit(LSTM10_train_input, LSTM10_train_output, epochs=epoch, 
                                      batch_size=batch_size, verbose=1, shuffle=True, validation_split=0.05)
                
        #plot_error(trained_LSTM10)
        trainScore = model_10.evaluate(LSTM10_train_input, LSTM10_train_output, verbose=0)
        testScore = model_10.evaluate(LSTM10_test_input, LSTM10_test_output, verbose=0)
        print("Mean Squared Error on the training data: {0:0.6f}".format(trainScore)) 
        print("Mean Squared Error on the test data:     {0:0.6f}".format(testScore))        
        
        #tickr = get_ticker(LSTM_train_list[stock_nbr])
        LSTM10_predictions = predict_multiple_sequences(model_10, LSTM10_test_input[::-1], window, into_the_future)
        
        # Save the predictions to a .csv file
        pd.DataFrame(LSTM10_predictions).to_csv(create_pred_path(stock_ticker))
        
        # Save the predictions to an array
        all_predictions[stock_ticker] = LSTM10_predictions 
        
        count +=1
        print('======================================================================================')    
        print('======================================================================================') 
        print()
    print('...Done!')
    print('Total run time (s): {0:0.0f}'.format(time.time()-global_start_time)) 
    return all_predictions
    
    
In [89]:
# predictions_10 contains all the predictions for each stock. (The run will take roughly an hour.)
#predictions_10 = make_all_10pred()
In [90]:
#print(pd.DataFrame(predictions_10['ACAN-B.ST']))

Read the predictions from the .csv files

In [91]:
def get_predictions():
    di = '/Users/jakob/Desktop/Programming/Udacity Machine Learning Nano Degree/Capstone Project/Predictions/'
    filePaths = glob(di+"*.csv")  # Get each .csv file in the directory

    # Get all the file names
    file_names_pred = []
    for root, dirs, files in os.walk(di):  
        for filename in files:
            filename = filename[:-4]   # Just keep the ticker name, without the .csv file extension
            file_names_pred.append(filename)
    del file_names_pred[0]

    # Get the predictions from the .csv files
    predictions = []
    for i in range(len(filePaths)):
        predi = pd.read_csv(filePaths[i])
        predi.drop(predi.columns[[0]], axis=1, inplace=True)
        predi.index.names = [file_names_pred[i]]
        predictions.append(predi)

    return predictions
        
predictions_10 = get_predictions()
In [92]:
def calc_accuracy(predictions, true_data):
    # predictions is a DataFrame of predicted values; true_data is a DataFrame with the true prices.
    correct, not_correct = 0, 0
    rows, columns = predictions.shape[0], predictions.shape[1]
    for row in range(rows):
        if (predictions.iloc[row,0] < predictions.iloc[row, -1]) and (true_data.iloc[row*columns-columns, 4] < true_data.iloc[row*columns, 4]):  
            correct += 1
        elif (predictions.iloc[row,0] > predictions.iloc[row, -1]) and (true_data.iloc[row*columns-columns, 4] > true_data.iloc[row*columns, 4]):
            correct += 1
        else:
            not_correct += 1
    return (correct / rows)


def get_top_predictions(top_x, pred_list, true_data_list):
    """top_x is the amount of top predictions desired to be returned by the function. 
        pred_list is a list containing all the predictions for all the stocks. 
        true_data_list is a list containing all the true data, stored in df"""
    cor_inc_pred = {}
    for pred in pred_list:
        row, col = pred.shape[0], pred.shape[1]
        tkr = get_ticker(pred)[:-6]
        true_data = get_stock(true_data_list, tkr)[::-1]
        if (pred.iloc[-1, -1] > pred.iloc[-1, 0]) and (true_data.iloc[row*col, 4] > true_data.iloc[row*col-col, 4]):
            cor_inc_pred[tkr] = ((pred.iloc[-1, -1] - pred.iloc[-1, 0]) / pred.iloc[-1, 0])
            
    sorted_cor_inc_pred = sorted(cor_inc_pred.items(), key=operator.itemgetter(1), reverse=True)
    if len(sorted_cor_inc_pred) < top_x:
        print('Fewer correct top predictions were found than asked for. Returning all that were found.\n')
    return sorted_cor_inc_pred[:top_x]
    


def plot_all_10pred(predictions):
    """Plot all the predicted values"""
    into_the_future, good_acc = 10, 0
    for predi in predictions:
        tkr = get_ticker(predi)[:-6]
        true_values = get_stock(LSTM_test_list, tkr)
        plot_long_pred(predi.values.tolist(), true_values, into_the_future,
                       title='10 day predictions on test set, ' + tkr, xlabel='Date', ylabel='Price')
        accuracy = calc_accuracy(predi, true_values)
        if accuracy >= 0.6:
            good_acc += 1
            
        print('Accuracy score for {0}: {1:0.2f}%'.format(tkr, accuracy*100))
        print('==================================================================================================') 
    print('==================================================================================================')
    print()
    print('Total number of satisfying predictions (over 60%): {0} out of {1}'.format(good_acc, len(predictions)))
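The metric computed by calc_accuracy is direction-only: a block counts as correct when the predicted move (first vs. last predicted value) has the same sign as the true move over the same span; magnitudes are ignored and flat moves count as incorrect. A minimal sketch of the comparison for one block, with hypothetical numbers:

```python
# Hypothetical boundary values for one 10-day prediction block.
pred_first, pred_last = 10.2, 11.0   # predicted start/end of the block
true_start, true_end = 10.0, 10.7    # true prices at the block boundaries

pred_up = pred_last > pred_first
true_up = true_end > true_start
correct = pred_up == true_up         # direction match, magnitude ignored
print(correct)  # True
```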
    
In [93]:
plot_all_10pred(predictions_10)
Accuracy score for ACAN-B.ST: 60.00%
==================================================================================================
Accuracy score for ADDT-B.ST: 52.00%
==================================================================================================
Accuracy score for ANOD-B.ST: 52.00%
==================================================================================================
Accuracy score for AOI.ST: 60.00%
==================================================================================================
Accuracy score for AQ.ST: 52.00%
==================================================================================================
Accuracy score for ARCM.ST: 47.83%
==================================================================================================
Accuracy score for BEIA-B.ST: 50.00%
==================================================================================================
Accuracy score for BEIJ-B.ST: 64.00%
==================================================================================================
Accuracy score for BIOG-B.ST: 50.00%
==================================================================================================
Accuracy score for BIOT.ST: 56.00%
==================================================================================================
Accuracy score for BMAX.ST: 48.00%
==================================================================================================
Accuracy score for BULTEN.ST: 41.67%
==================================================================================================
Accuracy score for BURE.ST: 41.67%
==================================================================================================
Accuracy score for CAT-A.ST: 68.00%
==================================================================================================
Accuracy score for CAT-B.ST: 44.00%
==================================================================================================
Accuracy score for CATE.ST: 52.00%
==================================================================================================
Accuracy score for CCC.ST: 48.00%
==================================================================================================
Accuracy score for CEVI.ST: 48.00%
==================================================================================================
Accuracy score for CLA-B.ST: 56.00%
==================================================================================================
Accuracy score for CLAS-B.ST: 33.33%
==================================================================================================
Accuracy score for COIC.ST: 58.33%
==================================================================================================
Accuracy score for CRED-A.ST: 54.17%
==================================================================================================
Accuracy score for DIOS.ST: 66.67%
==================================================================================================
Accuracy score for DUNI.ST: 40.00%
==================================================================================================
Accuracy score for ELAN-B.ST: 45.83%
==================================================================================================
Accuracy score for ENQ.ST: 52.00%
==================================================================================================
Accuracy score for FAG.ST: 41.67%
==================================================================================================
Accuracy score for FPAR.ST: 45.83%
==================================================================================================
Accuracy score for G5EN.ST: 50.00%
==================================================================================================
Accuracy score for GUNN.ST: 33.33%
==================================================================================================
Accuracy score for HEBA-B.ST: 41.67%
==================================================================================================
Accuracy score for HIQ.ST: 41.67%
==================================================================================================
Accuracy score for HLDX.ST: 56.00%
==================================================================================================
Accuracy score for HMED.ST: 62.50%
==================================================================================================
Accuracy score for HMS.ST: 52.00%
==================================================================================================
Accuracy score for IAR-B.ST: 56.00%
==================================================================================================
Accuracy score for IVSO.ST: 56.00%
==================================================================================================
Accuracy score for KABE-B.ST: 48.00%
==================================================================================================
Accuracy score for KAHL.ST: 40.00%
==================================================================================================
Accuracy score for KARO.ST: 45.83%
==================================================================================================
Accuracy score for KNOW.ST: 60.00%
==================================================================================================
Accuracy score for LIAB.ST: 36.00%
==================================================================================================
Accuracy score for LUC.ST: 37.50%
==================================================================================================
Accuracy score for MEKO.ST: 44.00%
==================================================================================================
Accuracy score for MSON-A.ST: 44.00%
==================================================================================================
Accuracy score for MSON-B.ST: 52.00%
==================================================================================================
Accuracy score for MVIR-B.ST: 60.00%
==================================================================================================
Accuracy score for MYCR.ST: 40.00%
==================================================================================================
Accuracy score for NETI-B.ST: 41.67%
==================================================================================================
Accuracy score for NEWA-B.ST: 52.00%
==================================================================================================
Accuracy score for NMAN.ST: 56.00%
==================================================================================================
Accuracy score for NOLA-B.ST: 44.00%
==================================================================================================
Accuracy score for OEM-B.ST: 56.00%
==================================================================================================
Accuracy score for OPUS.ST: 52.00%
==================================================================================================
Accuracy score for ORES.ST: 37.50%
==================================================================================================
Accuracy score for ORX.ST: 52.00%
==================================================================================================
Accuracy score for PROB.ST: 52.00%
==================================================================================================
Accuracy score for PXXS-SDB.ST: 41.67%
==================================================================================================
Accuracy score for QLRO.ST: 50.00%
==================================================================================================
Accuracy score for RAY-B.ST: 56.00%
==================================================================================================
Accuracy score for REZT.ST: 52.00%
==================================================================================================
Accuracy score for SAS.ST: 52.00%
==================================================================================================
Accuracy score for SKIS-B.ST: 56.00%
==================================================================================================
Accuracy score for SMF.ST: 56.00%
==================================================================================================
Accuracy score for STAR-B.ST: 48.00%
==================================================================================================
Accuracy score for SWOL-B.ST: 45.83%
==================================================================================================
Accuracy score for SYSR.ST: 41.67%
==================================================================================================
Accuracy score for TETY.ST: 56.00%
==================================================================================================
Accuracy score for TRAC-B.ST: 58.33%
==================================================================================================
Accuracy score for VBG-B.ST: 58.33%
==================================================================================================
Accuracy score for VITR.ST: 64.00%
==================================================================================================
Accuracy score for XVIVO.ST: 29.17%
==================================================================================================
==================================================================================================

Total number of satisfying predictions (over 60%): 9 out of 72
In [94]:
top_pred = get_top_predictions(5, predictions_10, LSTM_test_list)
print(top_pred)
[('REZT.ST', 0.16184365216011876), ('TETY.ST', 0.14915278453385278), ('BMAX.ST', 0.13797540005218606), ('CAT-A.ST', 0.099748663312861585), ('CEVI.ST', 0.064334982968075119)]
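The ranking inside get_top_predictions reduces to sorting a dictionary of relative predicted gains by value in descending order; a small sketch with made-up tickers:

```python
import operator

# Hypothetical tickers and relative predicted gains.
gains = {'AAA.ST': 0.05, 'BBB.ST': 0.16, 'CCC.ST': 0.10}
ranked = sorted(gains.items(), key=operator.itemgetter(1), reverse=True)
print(ranked[:2])  # [('BBB.ST', 0.16), ('CCC.ST', 0.1)]
```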